Oct 10 05:04:48 np0005479822 kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 10 05:04:48 np0005479822 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 10 05:04:48 np0005479822 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 05:04:48 np0005479822 kernel: BIOS-provided physical RAM map:
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 10 05:04:48 np0005479822 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 10 05:04:48 np0005479822 kernel: NX (Execute Disable) protection: active
Oct 10 05:04:48 np0005479822 kernel: APIC: Static calls initialized
Oct 10 05:04:48 np0005479822 kernel: SMBIOS 2.8 present.
Oct 10 05:04:48 np0005479822 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 10 05:04:48 np0005479822 kernel: Hypervisor detected: KVM
Oct 10 05:04:48 np0005479822 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 10 05:04:48 np0005479822 kernel: kvm-clock: using sched offset of 4352091932 cycles
Oct 10 05:04:48 np0005479822 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 10 05:04:48 np0005479822 kernel: tsc: Detected 2799.998 MHz processor
Oct 10 05:04:48 np0005479822 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 10 05:04:48 np0005479822 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 10 05:04:48 np0005479822 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 10 05:04:48 np0005479822 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 10 05:04:48 np0005479822 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 10 05:04:48 np0005479822 kernel: Using GB pages for direct mapping
Oct 10 05:04:48 np0005479822 kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 10 05:04:48 np0005479822 kernel: ACPI: Early table checksum verification disabled
Oct 10 05:04:48 np0005479822 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 10 05:04:48 np0005479822 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 05:04:48 np0005479822 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 05:04:48 np0005479822 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 05:04:48 np0005479822 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 10 05:04:48 np0005479822 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 05:04:48 np0005479822 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 05:04:48 np0005479822 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 10 05:04:48 np0005479822 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 10 05:04:48 np0005479822 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 10 05:04:48 np0005479822 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 10 05:04:48 np0005479822 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 10 05:04:48 np0005479822 kernel: No NUMA configuration found
Oct 10 05:04:48 np0005479822 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 10 05:04:48 np0005479822 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 10 05:04:48 np0005479822 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 10 05:04:48 np0005479822 kernel: Zone ranges:
Oct 10 05:04:48 np0005479822 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 10 05:04:48 np0005479822 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 10 05:04:48 np0005479822 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 05:04:48 np0005479822 kernel:  Device   empty
Oct 10 05:04:48 np0005479822 kernel: Movable zone start for each node
Oct 10 05:04:48 np0005479822 kernel: Early memory node ranges
Oct 10 05:04:48 np0005479822 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 10 05:04:48 np0005479822 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 10 05:04:48 np0005479822 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 05:04:48 np0005479822 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 10 05:04:48 np0005479822 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 10 05:04:48 np0005479822 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 10 05:04:48 np0005479822 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 10 05:04:48 np0005479822 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 10 05:04:48 np0005479822 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 10 05:04:48 np0005479822 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 10 05:04:48 np0005479822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 10 05:04:48 np0005479822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 10 05:04:48 np0005479822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 10 05:04:48 np0005479822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 10 05:04:48 np0005479822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 10 05:04:48 np0005479822 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 10 05:04:48 np0005479822 kernel: TSC deadline timer available
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Max. logical packages:   8
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Max. logical dies:       8
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Max. dies per package:   1
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Max. threads per core:   1
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Num. cores per package:     1
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Num. threads per package:   1
Oct 10 05:04:48 np0005479822 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 10 05:04:48 np0005479822 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 10 05:04:48 np0005479822 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 10 05:04:48 np0005479822 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 10 05:04:48 np0005479822 kernel: Booting paravirtualized kernel on KVM
Oct 10 05:04:48 np0005479822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 10 05:04:48 np0005479822 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 10 05:04:48 np0005479822 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 10 05:04:48 np0005479822 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 10 05:04:48 np0005479822 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 05:04:48 np0005479822 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 10 05:04:48 np0005479822 kernel: random: crng init done
Oct 10 05:04:48 np0005479822 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: Fallback order for Node 0: 0 
Oct 10 05:04:48 np0005479822 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 10 05:04:48 np0005479822 kernel: Policy zone: Normal
Oct 10 05:04:48 np0005479822 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 10 05:04:48 np0005479822 kernel: software IO TLB: area num 8.
Oct 10 05:04:48 np0005479822 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 10 05:04:48 np0005479822 kernel: ftrace: allocating 49162 entries in 193 pages
Oct 10 05:04:48 np0005479822 kernel: ftrace: allocated 193 pages with 3 groups
Oct 10 05:04:48 np0005479822 kernel: Dynamic Preempt: voluntary
Oct 10 05:04:48 np0005479822 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 10 05:04:48 np0005479822 kernel: rcu: 	RCU event tracing is enabled.
Oct 10 05:04:48 np0005479822 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 10 05:04:48 np0005479822 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 10 05:04:48 np0005479822 kernel: 	Rude variant of Tasks RCU enabled.
Oct 10 05:04:48 np0005479822 kernel: 	Tracing variant of Tasks RCU enabled.
Oct 10 05:04:48 np0005479822 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 10 05:04:48 np0005479822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 10 05:04:48 np0005479822 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 05:04:48 np0005479822 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 05:04:48 np0005479822 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 05:04:48 np0005479822 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 10 05:04:48 np0005479822 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 10 05:04:48 np0005479822 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 10 05:04:48 np0005479822 kernel: Console: colour VGA+ 80x25
Oct 10 05:04:48 np0005479822 kernel: printk: console [ttyS0] enabled
Oct 10 05:04:48 np0005479822 kernel: ACPI: Core revision 20230331
Oct 10 05:04:48 np0005479822 kernel: APIC: Switch to symmetric I/O mode setup
Oct 10 05:04:48 np0005479822 kernel: x2apic enabled
Oct 10 05:04:48 np0005479822 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 10 05:04:48 np0005479822 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 10 05:04:48 np0005479822 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 10 05:04:48 np0005479822 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 10 05:04:48 np0005479822 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 10 05:04:48 np0005479822 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 10 05:04:48 np0005479822 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 10 05:04:48 np0005479822 kernel: Spectre V2 : Mitigation: Retpolines
Oct 10 05:04:48 np0005479822 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 10 05:04:48 np0005479822 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 10 05:04:48 np0005479822 kernel: RETBleed: Mitigation: untrained return thunk
Oct 10 05:04:48 np0005479822 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 10 05:04:48 np0005479822 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 10 05:04:48 np0005479822 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 10 05:04:48 np0005479822 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 10 05:04:48 np0005479822 kernel: x86/bugs: return thunk changed
Oct 10 05:04:48 np0005479822 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 10 05:04:48 np0005479822 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 10 05:04:48 np0005479822 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 10 05:04:48 np0005479822 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 10 05:04:48 np0005479822 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 10 05:04:48 np0005479822 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 10 05:04:48 np0005479822 kernel: Freeing SMP alternatives memory: 40K
Oct 10 05:04:48 np0005479822 kernel: pid_max: default: 32768 minimum: 301
Oct 10 05:04:48 np0005479822 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 10 05:04:48 np0005479822 kernel: landlock: Up and running.
Oct 10 05:04:48 np0005479822 kernel: Yama: becoming mindful.
Oct 10 05:04:48 np0005479822 kernel: SELinux:  Initializing.
Oct 10 05:04:48 np0005479822 kernel: LSM support for eBPF active
Oct 10 05:04:48 np0005479822 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 10 05:04:48 np0005479822 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 10 05:04:48 np0005479822 kernel: ... version:                0
Oct 10 05:04:48 np0005479822 kernel: ... bit width:              48
Oct 10 05:04:48 np0005479822 kernel: ... generic registers:      6
Oct 10 05:04:48 np0005479822 kernel: ... value mask:             0000ffffffffffff
Oct 10 05:04:48 np0005479822 kernel: ... max period:             00007fffffffffff
Oct 10 05:04:48 np0005479822 kernel: ... fixed-purpose events:   0
Oct 10 05:04:48 np0005479822 kernel: ... event mask:             000000000000003f
Oct 10 05:04:48 np0005479822 kernel: signal: max sigframe size: 1776
Oct 10 05:04:48 np0005479822 kernel: rcu: Hierarchical SRCU implementation.
Oct 10 05:04:48 np0005479822 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 10 05:04:48 np0005479822 kernel: smp: Bringing up secondary CPUs ...
Oct 10 05:04:48 np0005479822 kernel: smpboot: x86: Booting SMP configuration:
Oct 10 05:04:48 np0005479822 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 10 05:04:48 np0005479822 kernel: smp: Brought up 1 node, 8 CPUs
Oct 10 05:04:48 np0005479822 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 10 05:04:48 np0005479822 kernel: node 0 deferred pages initialised in 9ms
Oct 10 05:04:48 np0005479822 kernel: Memory: 7765872K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved)
Oct 10 05:04:48 np0005479822 kernel: devtmpfs: initialized
Oct 10 05:04:48 np0005479822 kernel: x86/mm: Memory block size: 128MB
Oct 10 05:04:48 np0005479822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 10 05:04:48 np0005479822 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: pinctrl core: initialized pinctrl subsystem
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 10 05:04:48 np0005479822 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 10 05:04:48 np0005479822 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 10 05:04:48 np0005479822 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 10 05:04:48 np0005479822 kernel: audit: initializing netlink subsys (disabled)
Oct 10 05:04:48 np0005479822 kernel: audit: type=2000 audit(1760087086.473:1): state=initialized audit_enabled=0 res=1
Oct 10 05:04:48 np0005479822 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 10 05:04:48 np0005479822 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 10 05:04:48 np0005479822 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 10 05:04:48 np0005479822 kernel: cpuidle: using governor menu
Oct 10 05:04:48 np0005479822 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 10 05:04:48 np0005479822 kernel: PCI: Using configuration type 1 for base access
Oct 10 05:04:48 np0005479822 kernel: PCI: Using configuration type 1 for extended access
Oct 10 05:04:48 np0005479822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 10 05:04:48 np0005479822 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 10 05:04:48 np0005479822 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 10 05:04:48 np0005479822 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 10 05:04:48 np0005479822 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 10 05:04:48 np0005479822 kernel: Demotion targets for Node 0: null
Oct 10 05:04:48 np0005479822 kernel: cryptd: max_cpu_qlen set to 1000
Oct 10 05:04:48 np0005479822 kernel: ACPI: Added _OSI(Module Device)
Oct 10 05:04:48 np0005479822 kernel: ACPI: Added _OSI(Processor Device)
Oct 10 05:04:48 np0005479822 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 10 05:04:48 np0005479822 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 10 05:04:48 np0005479822 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 10 05:04:48 np0005479822 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 10 05:04:48 np0005479822 kernel: ACPI: Interpreter enabled
Oct 10 05:04:48 np0005479822 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 10 05:04:48 np0005479822 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 10 05:04:48 np0005479822 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 10 05:04:48 np0005479822 kernel: PCI: Using E820 reservations for host bridge windows
Oct 10 05:04:48 np0005479822 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 10 05:04:48 np0005479822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 10 05:04:48 np0005479822 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [3] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [4] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [5] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [6] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [7] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [8] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [9] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [10] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [11] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [12] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [13] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [14] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [15] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [16] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [17] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [18] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [19] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [20] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [21] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [22] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [23] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [24] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [25] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [26] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [27] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [28] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [29] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [30] registered
Oct 10 05:04:48 np0005479822 kernel: acpiphp: Slot [31] registered
Oct 10 05:04:48 np0005479822 kernel: PCI host bridge to bus 0000:00
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 10 05:04:48 np0005479822 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 10 05:04:48 np0005479822 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 10 05:04:48 np0005479822 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 10 05:04:48 np0005479822 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 10 05:04:48 np0005479822 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 10 05:04:48 np0005479822 kernel: iommu: Default domain type: Translated
Oct 10 05:04:48 np0005479822 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 10 05:04:48 np0005479822 kernel: SCSI subsystem initialized
Oct 10 05:04:48 np0005479822 kernel: ACPI: bus type USB registered
Oct 10 05:04:48 np0005479822 kernel: usbcore: registered new interface driver usbfs
Oct 10 05:04:48 np0005479822 kernel: usbcore: registered new interface driver hub
Oct 10 05:04:48 np0005479822 kernel: usbcore: registered new device driver usb
Oct 10 05:04:48 np0005479822 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 10 05:04:48 np0005479822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 10 05:04:48 np0005479822 kernel: PTP clock support registered
Oct 10 05:04:48 np0005479822 kernel: EDAC MC: Ver: 3.0.0
Oct 10 05:04:48 np0005479822 kernel: NetLabel: Initializing
Oct 10 05:04:48 np0005479822 kernel: NetLabel:  domain hash size = 128
Oct 10 05:04:48 np0005479822 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 10 05:04:48 np0005479822 kernel: NetLabel:  unlabeled traffic allowed by default
Oct 10 05:04:48 np0005479822 kernel: PCI: Using ACPI for IRQ routing
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 10 05:04:48 np0005479822 kernel: vgaarb: loaded
Oct 10 05:04:48 np0005479822 kernel: clocksource: Switched to clocksource kvm-clock
Oct 10 05:04:48 np0005479822 kernel: VFS: Disk quotas dquot_6.6.0
Oct 10 05:04:48 np0005479822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 10 05:04:48 np0005479822 kernel: pnp: PnP ACPI init
Oct 10 05:04:48 np0005479822 kernel: pnp: PnP ACPI: found 5 devices
Oct 10 05:04:48 np0005479822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_INET protocol family
Oct 10 05:04:48 np0005479822 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 10 05:04:48 np0005479822 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_XDP protocol family
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 10 05:04:48 np0005479822 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 10 05:04:48 np0005479822 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 10 05:04:48 np0005479822 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73094 usecs
Oct 10 05:04:48 np0005479822 kernel: PCI: CLS 0 bytes, default 64
Oct 10 05:04:48 np0005479822 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 10 05:04:48 np0005479822 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 10 05:04:48 np0005479822 kernel: ACPI: bus type thunderbolt registered
Oct 10 05:04:48 np0005479822 kernel: Trying to unpack rootfs image as initramfs...
Oct 10 05:04:48 np0005479822 kernel: Initialise system trusted keyrings
Oct 10 05:04:48 np0005479822 kernel: Key type blacklist registered
Oct 10 05:04:48 np0005479822 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 10 05:04:48 np0005479822 kernel: zbud: loaded
Oct 10 05:04:48 np0005479822 kernel: integrity: Platform Keyring initialized
Oct 10 05:04:48 np0005479822 kernel: integrity: Machine keyring initialized
Oct 10 05:04:48 np0005479822 kernel: Freeing initrd memory: 85808K
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_ALG protocol family
Oct 10 05:04:48 np0005479822 kernel: xor: automatically using best checksumming function   avx       
Oct 10 05:04:48 np0005479822 kernel: Key type asymmetric registered
Oct 10 05:04:48 np0005479822 kernel: Asymmetric key parser 'x509' registered
Oct 10 05:04:48 np0005479822 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 10 05:04:48 np0005479822 kernel: io scheduler mq-deadline registered
Oct 10 05:04:48 np0005479822 kernel: io scheduler kyber registered
Oct 10 05:04:48 np0005479822 kernel: io scheduler bfq registered
Oct 10 05:04:48 np0005479822 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 10 05:04:48 np0005479822 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 10 05:04:48 np0005479822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 10 05:04:48 np0005479822 kernel: ACPI: button: Power Button [PWRF]
Oct 10 05:04:48 np0005479822 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 10 05:04:48 np0005479822 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 10 05:04:48 np0005479822 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 10 05:04:48 np0005479822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 10 05:04:48 np0005479822 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 10 05:04:48 np0005479822 kernel: Non-volatile memory driver v1.3
Oct 10 05:04:48 np0005479822 kernel: rdac: device handler registered
Oct 10 05:04:48 np0005479822 kernel: hp_sw: device handler registered
Oct 10 05:04:48 np0005479822 kernel: emc: device handler registered
Oct 10 05:04:48 np0005479822 kernel: alua: device handler registered
Oct 10 05:04:48 np0005479822 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 10 05:04:48 np0005479822 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 10 05:04:48 np0005479822 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 10 05:04:48 np0005479822 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 10 05:04:48 np0005479822 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 10 05:04:48 np0005479822 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 10 05:04:48 np0005479822 kernel: usb usb1: Product: UHCI Host Controller
Oct 10 05:04:48 np0005479822 kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 10 05:04:48 np0005479822 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 10 05:04:48 np0005479822 kernel: hub 1-0:1.0: USB hub found
Oct 10 05:04:48 np0005479822 kernel: hub 1-0:1.0: 2 ports detected
Oct 10 05:04:48 np0005479822 kernel: usbcore: registered new interface driver usbserial_generic
Oct 10 05:04:48 np0005479822 kernel: usbserial: USB Serial support registered for generic
Oct 10 05:04:48 np0005479822 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 10 05:04:48 np0005479822 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 10 05:04:48 np0005479822 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 10 05:04:48 np0005479822 kernel: mousedev: PS/2 mouse device common for all mice
Oct 10 05:04:48 np0005479822 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 10 05:04:48 np0005479822 kernel: rtc_cmos 00:04: registered as rtc0
Oct 10 05:04:48 np0005479822 kernel: rtc_cmos 00:04: setting system clock to 2025-10-10T09:04:47 UTC (1760087087)
Oct 10 05:04:48 np0005479822 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 10 05:04:48 np0005479822 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 10 05:04:48 np0005479822 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 10 05:04:48 np0005479822 kernel: usbcore: registered new interface driver usbhid
Oct 10 05:04:48 np0005479822 kernel: usbhid: USB HID core driver
Oct 10 05:04:48 np0005479822 kernel: drop_monitor: Initializing network drop monitor service
Oct 10 05:04:48 np0005479822 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 10 05:04:48 np0005479822 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 10 05:04:48 np0005479822 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 10 05:04:48 np0005479822 kernel: Initializing XFRM netlink socket
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_INET6 protocol family
Oct 10 05:04:48 np0005479822 kernel: Segment Routing with IPv6
Oct 10 05:04:48 np0005479822 kernel: NET: Registered PF_PACKET protocol family
Oct 10 05:04:48 np0005479822 kernel: mpls_gso: MPLS GSO support
Oct 10 05:04:48 np0005479822 kernel: IPI shorthand broadcast: enabled
Oct 10 05:04:48 np0005479822 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 10 05:04:48 np0005479822 kernel: AES CTR mode by8 optimization enabled
Oct 10 05:04:48 np0005479822 kernel: sched_clock: Marking stable (1306004627, 153321865)->(1540933990, -81607498)
Oct 10 05:04:48 np0005479822 kernel: registered taskstats version 1
Oct 10 05:04:48 np0005479822 kernel: Loading compiled-in X.509 certificates
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 10 05:04:48 np0005479822 kernel: Demotion targets for Node 0: null
Oct 10 05:04:48 np0005479822 kernel: page_owner is disabled
Oct 10 05:04:48 np0005479822 kernel: Key type .fscrypt registered
Oct 10 05:04:48 np0005479822 kernel: Key type fscrypt-provisioning registered
Oct 10 05:04:48 np0005479822 kernel: Key type big_key registered
Oct 10 05:04:48 np0005479822 kernel: Key type encrypted registered
Oct 10 05:04:48 np0005479822 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 10 05:04:48 np0005479822 kernel: Loading compiled-in module X.509 certificates
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 05:04:48 np0005479822 kernel: ima: Allocated hash algorithm: sha256
Oct 10 05:04:48 np0005479822 kernel: ima: No architecture policies found
Oct 10 05:04:48 np0005479822 kernel: evm: Initialising EVM extended attributes:
Oct 10 05:04:48 np0005479822 kernel: evm: security.selinux
Oct 10 05:04:48 np0005479822 kernel: evm: security.SMACK64 (disabled)
Oct 10 05:04:48 np0005479822 kernel: evm: security.SMACK64EXEC (disabled)
Oct 10 05:04:48 np0005479822 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 10 05:04:48 np0005479822 kernel: evm: security.SMACK64MMAP (disabled)
Oct 10 05:04:48 np0005479822 kernel: evm: security.apparmor (disabled)
Oct 10 05:04:48 np0005479822 kernel: evm: security.ima
Oct 10 05:04:48 np0005479822 kernel: evm: security.capability
Oct 10 05:04:48 np0005479822 kernel: evm: HMAC attrs: 0x1
Oct 10 05:04:48 np0005479822 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 10 05:04:48 np0005479822 kernel: Running certificate verification RSA selftest
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 10 05:04:48 np0005479822 kernel: Running certificate verification ECDSA selftest
Oct 10 05:04:48 np0005479822 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 10 05:04:48 np0005479822 kernel: clk: Disabling unused clocks
Oct 10 05:04:48 np0005479822 kernel: Freeing unused decrypted memory: 2028K
Oct 10 05:04:48 np0005479822 kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 10 05:04:48 np0005479822 kernel: Write protecting the kernel read-only data: 30720k
Oct 10 05:04:48 np0005479822 kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 10 05:04:48 np0005479822 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 10 05:04:48 np0005479822 kernel: Run /init as init process
Oct 10 05:04:48 np0005479822 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 05:04:48 np0005479822 systemd: Detected virtualization kvm.
Oct 10 05:04:48 np0005479822 systemd: Detected architecture x86-64.
Oct 10 05:04:48 np0005479822 systemd: Running in initrd.
Oct 10 05:04:48 np0005479822 systemd: No hostname configured, using default hostname.
Oct 10 05:04:48 np0005479822 systemd: Hostname set to <localhost>.
Oct 10 05:04:48 np0005479822 systemd: Initializing machine ID from VM UUID.
Oct 10 05:04:48 np0005479822 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 10 05:04:48 np0005479822 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 10 05:04:48 np0005479822 kernel: usb 1-1: Product: QEMU USB Tablet
Oct 10 05:04:48 np0005479822 kernel: usb 1-1: Manufacturer: QEMU
Oct 10 05:04:48 np0005479822 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 10 05:04:48 np0005479822 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 10 05:04:48 np0005479822 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 10 05:04:48 np0005479822 systemd: Queued start job for default target Initrd Default Target.
Oct 10 05:04:48 np0005479822 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 05:04:48 np0005479822 systemd: Reached target Local Encrypted Volumes.
Oct 10 05:04:48 np0005479822 systemd: Reached target Initrd /usr File System.
Oct 10 05:04:48 np0005479822 systemd: Reached target Local File Systems.
Oct 10 05:04:48 np0005479822 systemd: Reached target Path Units.
Oct 10 05:04:48 np0005479822 systemd: Reached target Slice Units.
Oct 10 05:04:48 np0005479822 systemd: Reached target Swaps.
Oct 10 05:04:48 np0005479822 systemd: Reached target Timer Units.
Oct 10 05:04:48 np0005479822 systemd: Listening on D-Bus System Message Bus Socket.
Oct 10 05:04:48 np0005479822 systemd: Listening on Journal Socket (/dev/log).
Oct 10 05:04:48 np0005479822 systemd: Listening on Journal Socket.
Oct 10 05:04:48 np0005479822 systemd: Listening on udev Control Socket.
Oct 10 05:04:48 np0005479822 systemd: Listening on udev Kernel Socket.
Oct 10 05:04:48 np0005479822 systemd: Reached target Socket Units.
Oct 10 05:04:48 np0005479822 systemd: Starting Create List of Static Device Nodes...
Oct 10 05:04:48 np0005479822 systemd: Starting Journal Service...
Oct 10 05:04:48 np0005479822 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 05:04:48 np0005479822 systemd: Starting Apply Kernel Variables...
Oct 10 05:04:48 np0005479822 systemd: Starting Create System Users...
Oct 10 05:04:48 np0005479822 systemd: Starting Setup Virtual Console...
Oct 10 05:04:48 np0005479822 systemd: Finished Create List of Static Device Nodes.
Oct 10 05:04:48 np0005479822 systemd: Finished Apply Kernel Variables.
Oct 10 05:04:48 np0005479822 systemd: Finished Create System Users.
Oct 10 05:04:48 np0005479822 systemd-journald[307]: Journal started
Oct 10 05:04:48 np0005479822 systemd-journald[307]: Runtime Journal (/run/log/journal/b3ce59718a214607a1ce4c5a00fcffdd) is 8.0M, max 153.6M, 145.6M free.
Oct 10 05:04:48 np0005479822 systemd-sysusers[312]: Creating group 'users' with GID 100.
Oct 10 05:04:48 np0005479822 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Oct 10 05:04:48 np0005479822 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 10 05:04:48 np0005479822 systemd: Started Journal Service.
Oct 10 05:04:48 np0005479822 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 05:04:48 np0005479822 systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 05:04:48 np0005479822 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 05:04:48 np0005479822 systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 05:04:48 np0005479822 systemd[1]: Finished Setup Virtual Console.
Oct 10 05:04:48 np0005479822 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 10 05:04:48 np0005479822 systemd[1]: Starting dracut cmdline hook...
Oct 10 05:04:48 np0005479822 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 10 05:04:48 np0005479822 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 05:04:48 np0005479822 systemd[1]: Finished dracut cmdline hook.
Oct 10 05:04:48 np0005479822 systemd[1]: Starting dracut pre-udev hook...
Oct 10 05:04:48 np0005479822 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 10 05:04:48 np0005479822 kernel: device-mapper: uevent: version 1.0.3
Oct 10 05:04:48 np0005479822 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 10 05:04:48 np0005479822 kernel: RPC: Registered named UNIX socket transport module.
Oct 10 05:04:48 np0005479822 kernel: RPC: Registered udp transport module.
Oct 10 05:04:48 np0005479822 kernel: RPC: Registered tcp transport module.
Oct 10 05:04:48 np0005479822 kernel: RPC: Registered tcp-with-tls transport module.
Oct 10 05:04:48 np0005479822 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 10 05:04:48 np0005479822 rpc.statd[445]: Version 2.5.4 starting
Oct 10 05:04:48 np0005479822 rpc.statd[445]: Initializing NSM state
Oct 10 05:04:48 np0005479822 rpc.idmapd[450]: Setting log level to 0
Oct 10 05:04:48 np0005479822 systemd[1]: Finished dracut pre-udev hook.
Oct 10 05:04:48 np0005479822 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 05:04:48 np0005479822 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 05:04:48 np0005479822 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 05:04:48 np0005479822 systemd[1]: Starting dracut pre-trigger hook...
Oct 10 05:04:48 np0005479822 systemd[1]: Finished dracut pre-trigger hook.
Oct 10 05:04:48 np0005479822 systemd[1]: Starting Coldplug All udev Devices...
Oct 10 05:04:49 np0005479822 systemd[1]: Created slice Slice /system/modprobe.
Oct 10 05:04:49 np0005479822 systemd[1]: Starting Load Kernel Module configfs...
Oct 10 05:04:49 np0005479822 systemd[1]: Finished Coldplug All udev Devices.
Oct 10 05:04:49 np0005479822 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 05:04:49 np0005479822 systemd[1]: Finished Load Kernel Module configfs.
Oct 10 05:04:49 np0005479822 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target Network.
Oct 10 05:04:49 np0005479822 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 05:04:49 np0005479822 systemd[1]: Starting dracut initqueue hook...
Oct 10 05:04:49 np0005479822 systemd[1]: Mounting Kernel Configuration File System...
Oct 10 05:04:49 np0005479822 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 10 05:04:49 np0005479822 systemd[1]: Mounted Kernel Configuration File System.
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target System Initialization.
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target Basic System.
Oct 10 05:04:49 np0005479822 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 10 05:04:49 np0005479822 kernel: vda: vda1
Oct 10 05:04:49 np0005479822 kernel: scsi host0: ata_piix
Oct 10 05:04:49 np0005479822 kernel: scsi host1: ata_piix
Oct 10 05:04:49 np0005479822 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 10 05:04:49 np0005479822 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 10 05:04:49 np0005479822 systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 05:04:49 np0005479822 systemd-udevd[468]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target Initrd Root Device.
Oct 10 05:04:49 np0005479822 kernel: ata1: found unknown device (class 0)
Oct 10 05:04:49 np0005479822 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 10 05:04:49 np0005479822 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 10 05:04:49 np0005479822 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 10 05:04:49 np0005479822 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 10 05:04:49 np0005479822 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 10 05:04:49 np0005479822 systemd[1]: Finished dracut initqueue hook.
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target Remote Encrypted Volumes.
Oct 10 05:04:49 np0005479822 systemd[1]: Reached target Remote File Systems.
Oct 10 05:04:49 np0005479822 systemd[1]: Starting dracut pre-mount hook...
Oct 10 05:04:49 np0005479822 systemd[1]: Finished dracut pre-mount hook.
Oct 10 05:04:49 np0005479822 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 10 05:04:49 np0005479822 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Oct 10 05:04:49 np0005479822 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 05:04:49 np0005479822 systemd[1]: Mounting /sysroot...
Oct 10 05:04:50 np0005479822 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 10 05:04:50 np0005479822 kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 10 05:04:50 np0005479822 kernel: XFS (vda1): Ending clean mount
Oct 10 05:04:50 np0005479822 systemd[1]: Mounted /sysroot.
Oct 10 05:04:50 np0005479822 systemd[1]: Reached target Initrd Root File System.
Oct 10 05:04:50 np0005479822 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 10 05:04:50 np0005479822 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 10 05:04:50 np0005479822 systemd[1]: Reached target Initrd File Systems.
Oct 10 05:04:50 np0005479822 systemd[1]: Reached target Initrd Default Target.
Oct 10 05:04:50 np0005479822 systemd[1]: Starting dracut mount hook...
Oct 10 05:04:50 np0005479822 systemd[1]: Finished dracut mount hook.
Oct 10 05:04:50 np0005479822 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 10 05:04:50 np0005479822 rpc.idmapd[450]: exiting on signal 15
Oct 10 05:04:50 np0005479822 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 10 05:04:50 np0005479822 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Network.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Timer Units.
Oct 10 05:04:50 np0005479822 systemd[1]: dbus.socket: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Initrd Default Target.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Basic System.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Initrd Root Device.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Initrd /usr File System.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Path Units.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Remote File Systems.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Slice Units.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Socket Units.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target System Initialization.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Local File Systems.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Swaps.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut mount hook.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut pre-mount hook.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped target Local Encrypted Volumes.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut initqueue hook.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Apply Kernel Variables.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Create Volatile Files and Directories.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Coldplug All udev Devices.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut pre-trigger hook.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Setup Virtual Console.
Oct 10 05:04:50 np0005479822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Closed udev Control Socket.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Closed udev Kernel Socket.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut pre-udev hook.
Oct 10 05:04:50 np0005479822 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped dracut cmdline hook.
Oct 10 05:04:50 np0005479822 systemd[1]: Starting Cleanup udev Database...
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 10 05:04:50 np0005479822 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Create List of Static Device Nodes.
Oct 10 05:04:50 np0005479822 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Stopped Create System Users.
Oct 10 05:04:50 np0005479822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 10 05:04:50 np0005479822 systemd[1]: Finished Cleanup udev Database.
Oct 10 05:04:50 np0005479822 systemd[1]: Reached target Switch Root.
Oct 10 05:04:50 np0005479822 systemd[1]: Starting Switch Root...
Oct 10 05:04:50 np0005479822 systemd[1]: Switching root.
Oct 10 05:04:50 np0005479822 systemd-journald[307]: Journal stopped
Oct 10 05:04:51 np0005479822 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct 10 05:04:51 np0005479822 kernel: audit: type=1404 audit(1760087090.673:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:04:51 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:04:51 np0005479822 kernel: audit: type=1403 audit(1760087090.842:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 10 05:04:51 np0005479822 systemd: Successfully loaded SELinux policy in 175.716ms.
Oct 10 05:04:51 np0005479822 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.036ms.
Oct 10 05:04:51 np0005479822 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 05:04:51 np0005479822 systemd: Detected virtualization kvm.
Oct 10 05:04:51 np0005479822 systemd: Detected architecture x86-64.
Oct 10 05:04:51 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:04:51 np0005479822 systemd: initrd-switch-root.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd: Stopped Switch Root.
Oct 10 05:04:51 np0005479822 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 10 05:04:51 np0005479822 systemd: Created slice Slice /system/getty.
Oct 10 05:04:51 np0005479822 systemd: Created slice Slice /system/serial-getty.
Oct 10 05:04:51 np0005479822 systemd: Created slice Slice /system/sshd-keygen.
Oct 10 05:04:51 np0005479822 systemd: Created slice User and Session Slice.
Oct 10 05:04:51 np0005479822 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 05:04:51 np0005479822 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct 10 05:04:51 np0005479822 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 10 05:04:51 np0005479822 systemd: Reached target Local Encrypted Volumes.
Oct 10 05:04:51 np0005479822 systemd: Stopped target Switch Root.
Oct 10 05:04:51 np0005479822 systemd: Stopped target Initrd File Systems.
Oct 10 05:04:51 np0005479822 systemd: Stopped target Initrd Root File System.
Oct 10 05:04:51 np0005479822 systemd: Reached target Local Integrity Protected Volumes.
Oct 10 05:04:51 np0005479822 systemd: Reached target Path Units.
Oct 10 05:04:51 np0005479822 systemd: Reached target rpc_pipefs.target.
Oct 10 05:04:51 np0005479822 systemd: Reached target Slice Units.
Oct 10 05:04:51 np0005479822 systemd: Reached target Swaps.
Oct 10 05:04:51 np0005479822 systemd: Reached target Local Verity Protected Volumes.
Oct 10 05:04:51 np0005479822 systemd: Listening on RPCbind Server Activation Socket.
Oct 10 05:04:51 np0005479822 systemd: Reached target RPC Port Mapper.
Oct 10 05:04:51 np0005479822 systemd: Listening on Process Core Dump Socket.
Oct 10 05:04:51 np0005479822 systemd: Listening on initctl Compatibility Named Pipe.
Oct 10 05:04:51 np0005479822 systemd: Listening on udev Control Socket.
Oct 10 05:04:51 np0005479822 systemd: Listening on udev Kernel Socket.
Oct 10 05:04:51 np0005479822 systemd: Mounting Huge Pages File System...
Oct 10 05:04:51 np0005479822 systemd: Mounting POSIX Message Queue File System...
Oct 10 05:04:51 np0005479822 systemd: Mounting Kernel Debug File System...
Oct 10 05:04:51 np0005479822 systemd: Mounting Kernel Trace File System...
Oct 10 05:04:51 np0005479822 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 05:04:51 np0005479822 systemd: Starting Create List of Static Device Nodes...
Oct 10 05:04:51 np0005479822 systemd: Starting Load Kernel Module configfs...
Oct 10 05:04:51 np0005479822 systemd: Starting Load Kernel Module drm...
Oct 10 05:04:51 np0005479822 systemd: Starting Load Kernel Module efi_pstore...
Oct 10 05:04:51 np0005479822 systemd: Starting Load Kernel Module fuse...
Oct 10 05:04:51 np0005479822 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 10 05:04:51 np0005479822 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd: Stopped File System Check on Root Device.
Oct 10 05:04:51 np0005479822 systemd: Stopped Journal Service.
Oct 10 05:04:51 np0005479822 systemd: Starting Journal Service...
Oct 10 05:04:51 np0005479822 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 05:04:51 np0005479822 systemd: Starting Generate network units from Kernel command line...
Oct 10 05:04:51 np0005479822 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 05:04:51 np0005479822 systemd: Starting Remount Root and Kernel File Systems...
Oct 10 05:04:51 np0005479822 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 10 05:04:51 np0005479822 systemd: Starting Apply Kernel Variables...
Oct 10 05:04:51 np0005479822 systemd: Starting Coldplug All udev Devices...
Oct 10 05:04:51 np0005479822 systemd: Mounted Huge Pages File System.
Oct 10 05:04:51 np0005479822 systemd: Mounted POSIX Message Queue File System.
Oct 10 05:04:51 np0005479822 systemd-journald[678]: Journal started
Oct 10 05:04:51 np0005479822 systemd-journald[678]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 05:04:51 np0005479822 systemd[1]: Queued start job for default target Multi-User System.
Oct 10 05:04:51 np0005479822 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd: Started Journal Service.
Oct 10 05:04:51 np0005479822 systemd[1]: Mounted Kernel Debug File System.
Oct 10 05:04:51 np0005479822 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 10 05:04:51 np0005479822 systemd[1]: Mounted Kernel Trace File System.
Oct 10 05:04:51 np0005479822 kernel: ACPI: bus type drm_connector registered
Oct 10 05:04:51 np0005479822 kernel: fuse: init (API version 7.37)
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Create List of Static Device Nodes.
Oct 10 05:04:51 np0005479822 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Load Kernel Module configfs.
Oct 10 05:04:51 np0005479822 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Load Kernel Module drm.
Oct 10 05:04:51 np0005479822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 10 05:04:51 np0005479822 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Load Kernel Module fuse.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Generate network units from Kernel command line.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Apply Kernel Variables.
Oct 10 05:04:51 np0005479822 systemd[1]: Mounting FUSE Control File System...
Oct 10 05:04:51 np0005479822 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Rebuild Hardware Database...
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 10 05:04:51 np0005479822 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Load/Save OS Random Seed...
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Create System Users...
Oct 10 05:04:51 np0005479822 systemd[1]: Mounted FUSE Control File System.
Oct 10 05:04:51 np0005479822 systemd-journald[678]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 05:04:51 np0005479822 systemd-journald[678]: Received client request to flush runtime journal.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Load/Save OS Random Seed.
Oct 10 05:04:51 np0005479822 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Coldplug All udev Devices.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Create System Users.
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 05:04:51 np0005479822 systemd[1]: Reached target Preparation for Local File Systems.
Oct 10 05:04:51 np0005479822 systemd[1]: Reached target Local File Systems.
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 10 05:04:51 np0005479822 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 10 05:04:51 np0005479822 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 10 05:04:51 np0005479822 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Automatic Boot Loader Update...
Oct 10 05:04:51 np0005479822 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 05:04:51 np0005479822 bootctl[696]: Couldn't find EFI system partition, skipping.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Automatic Boot Loader Update.
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Security Auditing Service...
Oct 10 05:04:51 np0005479822 systemd[1]: Starting RPC Bind...
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Rebuild Journal Catalog...
Oct 10 05:04:51 np0005479822 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 10 05:04:51 np0005479822 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Rebuild Journal Catalog.
Oct 10 05:04:51 np0005479822 systemd[1]: Started RPC Bind.
Oct 10 05:04:51 np0005479822 augenrules[707]: /sbin/augenrules: No change
Oct 10 05:04:51 np0005479822 augenrules[722]: No rules
Oct 10 05:04:51 np0005479822 augenrules[722]: enabled 1
Oct 10 05:04:51 np0005479822 augenrules[722]: failure 1
Oct 10 05:04:51 np0005479822 augenrules[722]: pid 702
Oct 10 05:04:51 np0005479822 augenrules[722]: rate_limit 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_limit 8192
Oct 10 05:04:51 np0005479822 augenrules[722]: lost 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog 3
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_wait_time 60000
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_wait_time_actual 0
Oct 10 05:04:51 np0005479822 augenrules[722]: enabled 1
Oct 10 05:04:51 np0005479822 augenrules[722]: failure 1
Oct 10 05:04:51 np0005479822 augenrules[722]: pid 702
Oct 10 05:04:51 np0005479822 augenrules[722]: rate_limit 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_limit 8192
Oct 10 05:04:51 np0005479822 augenrules[722]: lost 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_wait_time 60000
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_wait_time_actual 0
Oct 10 05:04:51 np0005479822 augenrules[722]: enabled 1
Oct 10 05:04:51 np0005479822 augenrules[722]: failure 1
Oct 10 05:04:51 np0005479822 augenrules[722]: pid 702
Oct 10 05:04:51 np0005479822 augenrules[722]: rate_limit 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_limit 8192
Oct 10 05:04:51 np0005479822 augenrules[722]: lost 0
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog 4
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_wait_time 60000
Oct 10 05:04:51 np0005479822 augenrules[722]: backlog_wait_time_actual 0
Oct 10 05:04:51 np0005479822 systemd[1]: Started Security Auditing Service.
Oct 10 05:04:51 np0005479822 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 10 05:04:51 np0005479822 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 10 05:04:52 np0005479822 systemd[1]: Finished Rebuild Hardware Database.
Oct 10 05:04:52 np0005479822 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 05:04:52 np0005479822 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 10 05:04:52 np0005479822 systemd[1]: Starting Update is Completed...
Oct 10 05:04:52 np0005479822 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 05:04:52 np0005479822 systemd[1]: Finished Update is Completed.
Oct 10 05:04:52 np0005479822 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 05:04:52 np0005479822 systemd[1]: Reached target System Initialization.
Oct 10 05:04:52 np0005479822 systemd[1]: Started dnf makecache --timer.
Oct 10 05:04:52 np0005479822 systemd[1]: Started Daily rotation of log files.
Oct 10 05:04:52 np0005479822 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 10 05:04:52 np0005479822 systemd[1]: Reached target Timer Units.
Oct 10 05:04:52 np0005479822 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 10 05:04:52 np0005479822 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 10 05:04:52 np0005479822 systemd[1]: Reached target Socket Units.
Oct 10 05:04:52 np0005479822 systemd[1]: Starting D-Bus System Message Bus...
Oct 10 05:04:52 np0005479822 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 05:04:52 np0005479822 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 10 05:04:52 np0005479822 systemd[1]: Starting Load Kernel Module configfs...
Oct 10 05:04:52 np0005479822 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 05:04:52 np0005479822 systemd[1]: Finished Load Kernel Module configfs.
Oct 10 05:04:52 np0005479822 systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:04:52 np0005479822 systemd[1]: Started D-Bus System Message Bus.
Oct 10 05:04:52 np0005479822 systemd[1]: Reached target Basic System.
Oct 10 05:04:52 np0005479822 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 10 05:04:52 np0005479822 dbus-broker-lau[759]: Ready
Oct 10 05:04:52 np0005479822 systemd[1]: Starting NTP client/server...
Oct 10 05:04:52 np0005479822 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 10 05:04:52 np0005479822 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 10 05:04:52 np0005479822 systemd[1]: Starting IPv4 firewall with iptables...
Oct 10 05:04:52 np0005479822 systemd[1]: Started irqbalance daemon.
Oct 10 05:04:52 np0005479822 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 10 05:04:52 np0005479822 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 05:04:52 np0005479822 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 05:04:52 np0005479822 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 05:04:52 np0005479822 systemd[1]: Reached target sshd-keygen.target.
Oct 10 05:04:52 np0005479822 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 10 05:04:52 np0005479822 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 10 05:04:52 np0005479822 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 10 05:04:52 np0005479822 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 10 05:04:52 np0005479822 systemd[1]: Reached target User and Group Name Lookups.
Oct 10 05:04:52 np0005479822 chronyd[795]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 05:04:52 np0005479822 chronyd[795]: Loaded 0 symmetric keys
Oct 10 05:04:52 np0005479822 chronyd[795]: Using right/UTC timezone to obtain leap second data
Oct 10 05:04:52 np0005479822 chronyd[795]: Loaded seccomp filter (level 2)
Oct 10 05:04:52 np0005479822 systemd[1]: Starting User Login Management...
Oct 10 05:04:52 np0005479822 systemd[1]: Started NTP client/server.
Oct 10 05:04:52 np0005479822 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 10 05:04:52 np0005479822 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 10 05:04:52 np0005479822 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 10 05:04:52 np0005479822 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 10 05:04:52 np0005479822 kernel: Console: switching to colour dummy device 80x25
Oct 10 05:04:52 np0005479822 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 10 05:04:52 np0005479822 kernel: [drm] features: -context_init
Oct 10 05:04:52 np0005479822 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 10 05:04:52 np0005479822 kernel: [drm] number of scanouts: 1
Oct 10 05:04:52 np0005479822 kernel: [drm] number of cap sets: 0
Oct 10 05:04:52 np0005479822 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 10 05:04:52 np0005479822 systemd-logind[789]: New seat seat0.
Oct 10 05:04:52 np0005479822 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 05:04:52 np0005479822 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 05:04:52 np0005479822 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 10 05:04:52 np0005479822 kernel: Console: switching to colour frame buffer device 128x48
Oct 10 05:04:52 np0005479822 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 10 05:04:52 np0005479822 systemd[1]: Started User Login Management.
Oct 10 05:04:52 np0005479822 kernel: kvm_amd: TSC scaling supported
Oct 10 05:04:52 np0005479822 kernel: kvm_amd: Nested Virtualization enabled
Oct 10 05:04:52 np0005479822 kernel: kvm_amd: Nested Paging enabled
Oct 10 05:04:52 np0005479822 kernel: kvm_amd: LBR virtualization supported
Oct 10 05:04:52 np0005479822 iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Oct 10 05:04:52 np0005479822 systemd[1]: Finished IPv4 firewall with iptables.
Oct 10 05:04:53 np0005479822 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 10 Oct 2025 09:04:53 +0000. Up 6.79 seconds.
Oct 10 05:04:53 np0005479822 systemd[1]: run-cloud\x2dinit-tmp-tmpx0gvtun6.mount: Deactivated successfully.
Oct 10 05:04:53 np0005479822 systemd[1]: Starting Hostname Service...
Oct 10 05:04:53 np0005479822 systemd[1]: Started Hostname Service.
Oct 10 05:04:53 np0005479822 systemd-hostnamed[853]: Hostname set to <np0005479822.novalocal> (static)
Oct 10 05:04:53 np0005479822 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 10 05:04:53 np0005479822 systemd[1]: Reached target Preparation for Network.
Oct 10 05:04:53 np0005479822 systemd[1]: Starting Network Manager...
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.6836] NetworkManager (version 1.54.1-1.el9) is starting... (boot:fb56a8ec-12f4-4a91-b74d-e8ffc8e6ce0c)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.6841] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7013] manager[0x5637f32b3080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7092] hostname: hostname: using hostnamed
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7092] hostname: static hostname changed from (none) to "np0005479822.novalocal"
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7096] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7250] manager[0x5637f32b3080]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7251] manager[0x5637f32b3080]: rfkill: WWAN hardware radio set enabled
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7337] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7338] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7338] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7339] manager: Networking is enabled by state file
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7341] settings: Loaded settings plugin: keyfile (internal)
Oct 10 05:04:53 np0005479822 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7406] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7429] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7450] dhcp: init: Using DHCP client 'internal'
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7453] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7464] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7475] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7481] device (lo): Activation: starting connection 'lo' (da285bad-fb13-45e9-93ce-582789837c7a)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7489] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7491] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7516] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7519] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7520] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7522] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7524] device (eth0): carrier: link connected
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7526] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7531] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7536] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7539] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7539] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7541] manager: NetworkManager state is now CONNECTING
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7542] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7547] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7550] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7593] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7599] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7614] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:04:53 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 05:04:53 np0005479822 systemd[1]: Started Network Manager.
Oct 10 05:04:53 np0005479822 systemd[1]: Reached target Network.
Oct 10 05:04:53 np0005479822 systemd[1]: Starting Network Manager Wait Online...
Oct 10 05:04:53 np0005479822 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 10 05:04:53 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7876] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7880] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7882] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7888] device (lo): Activation: successful, device activated.
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7895] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7898] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7901] device (eth0): Activation: successful, device activated.
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7906] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 05:04:53 np0005479822 NetworkManager[857]: <info>  [1760087093.7909] manager: startup complete
Oct 10 05:04:53 np0005479822 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 10 05:04:53 np0005479822 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 05:04:53 np0005479822 systemd[1]: Reached target NFS client services.
Oct 10 05:04:53 np0005479822 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 05:04:53 np0005479822 systemd[1]: Reached target Remote File Systems.
Oct 10 05:04:53 np0005479822 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 05:04:53 np0005479822 systemd[1]: Finished Network Manager Wait Online.
Oct 10 05:04:53 np0005479822 systemd[1]: Starting Cloud-init: Network Stage...
Oct 10 05:04:54 np0005479822 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 10 Oct 2025 09:04:54 +0000. Up 7.88 seconds.
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.20        | 255.255.255.0 | global | fa:16:3e:c5:0d:38 |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fec5:d38/64 |       .       |  link  | fa:16:3e:c5:0d:38 |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 10 05:04:54 np0005479822 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 05:04:55 np0005479822 cloud-init[922]: Generating public/private rsa key pair.
Oct 10 05:04:55 np0005479822 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 10 05:04:55 np0005479822 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 10 05:04:55 np0005479822 cloud-init[922]: The key fingerprint is:
Oct 10 05:04:55 np0005479822 cloud-init[922]: SHA256:g3CHQSjxteelLmbilWjb94SN/JociuetcndYLvv3Iwc root@np0005479822.novalocal
Oct 10 05:04:55 np0005479822 cloud-init[922]: The key's randomart image is:
Oct 10 05:04:55 np0005479822 cloud-init[922]: +---[RSA 3072]----+
Oct 10 05:04:55 np0005479822 cloud-init[922]: |  .. o+          |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |  .... +         |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |   .o + o .      |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |     o = o       |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |      . S        |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |     . + =. E    |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |    + * *+o  .   |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |   o.Oo===o o o  |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |    +==+*B+. +.. |
Oct 10 05:04:55 np0005479822 cloud-init[922]: +----[SHA256]-----+
Oct 10 05:04:55 np0005479822 cloud-init[922]: Generating public/private ecdsa key pair.
Oct 10 05:04:55 np0005479822 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 10 05:04:55 np0005479822 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 10 05:04:55 np0005479822 cloud-init[922]: The key fingerprint is:
Oct 10 05:04:55 np0005479822 cloud-init[922]: SHA256:tCnXzgijmRuBp7Wqe0xuaPNgbnoKDvhNpO1wHQ5UY64 root@np0005479822.novalocal
Oct 10 05:04:55 np0005479822 cloud-init[922]: The key's randomart image is:
Oct 10 05:04:55 np0005479822 cloud-init[922]: +---[ECDSA 256]---+
Oct 10 05:04:55 np0005479822 cloud-init[922]: |      +          |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |     + .         |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |    . . .        |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |   o . . +       |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |  . E = S .      |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |. .B X * +       |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |+B+ X o . o      |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |B=BB o           |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |OX=.+            |
Oct 10 05:04:55 np0005479822 cloud-init[922]: +----[SHA256]-----+
Oct 10 05:04:55 np0005479822 cloud-init[922]: Generating public/private ed25519 key pair.
Oct 10 05:04:55 np0005479822 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 10 05:04:55 np0005479822 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 10 05:04:55 np0005479822 cloud-init[922]: The key fingerprint is:
Oct 10 05:04:55 np0005479822 cloud-init[922]: SHA256:olpr5gMCN9xU4nDpDbOa35Uc2XGLsvtuw3v6LGO0SRY root@np0005479822.novalocal
Oct 10 05:04:55 np0005479822 cloud-init[922]: The key's randomart image is:
Oct 10 05:04:55 np0005479822 cloud-init[922]: +--[ED25519 256]--+
Oct 10 05:04:55 np0005479822 cloud-init[922]: |  . oo.          |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |   +=.    . .    |
Oct 10 05:04:55 np0005479822 cloud-init[922]: | . +.=   o + .   |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |. + + . + E .    |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |.. +  ..S= .     |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |. +  . .= +      |
Oct 10 05:04:55 np0005479822 cloud-init[922]: | . oo. . * o     |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |   o=.. . X..    |
Oct 10 05:04:55 np0005479822 cloud-init[922]: |  .+o.   ==Oo    |
Oct 10 05:04:55 np0005479822 cloud-init[922]: +----[SHA256]-----+
Oct 10 05:04:55 np0005479822 systemd[1]: Finished Cloud-init: Network Stage.
Oct 10 05:04:55 np0005479822 systemd[1]: Reached target Cloud-config availability.
Oct 10 05:04:55 np0005479822 systemd[1]: Reached target Network is Online.
Oct 10 05:04:55 np0005479822 systemd[1]: Starting Cloud-init: Config Stage...
Oct 10 05:04:55 np0005479822 systemd[1]: Starting Notify NFS peers of a restart...
Oct 10 05:04:55 np0005479822 systemd[1]: Starting System Logging Service...
Oct 10 05:04:55 np0005479822 sm-notify[1004]: Version 2.5.4 starting
Oct 10 05:04:55 np0005479822 systemd[1]: Starting OpenSSH server daemon...
Oct 10 05:04:55 np0005479822 systemd[1]: Starting Permit User Sessions...
Oct 10 05:04:55 np0005479822 systemd[1]: Started Notify NFS peers of a restart.
Oct 10 05:04:55 np0005479822 systemd[1]: Started OpenSSH server daemon.
Oct 10 05:04:55 np0005479822 systemd[1]: Finished Permit User Sessions.
Oct 10 05:04:55 np0005479822 systemd[1]: Started Command Scheduler.
Oct 10 05:04:55 np0005479822 systemd[1]: Started Getty on tty1.
Oct 10 05:04:55 np0005479822 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Oct 10 05:04:55 np0005479822 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 10 05:04:55 np0005479822 systemd[1]: Started Serial Getty on ttyS0.
Oct 10 05:04:55 np0005479822 systemd[1]: Reached target Login Prompts.
Oct 10 05:04:55 np0005479822 systemd[1]: Started System Logging Service.
Oct 10 05:04:55 np0005479822 systemd[1]: Reached target Multi-User System.
Oct 10 05:04:55 np0005479822 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 10 05:04:55 np0005479822 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 10 05:04:55 np0005479822 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 10 05:04:55 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:04:55 np0005479822 cloud-init[1018]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 10 Oct 2025 09:04:55 +0000. Up 9.51 seconds.
Oct 10 05:04:55 np0005479822 systemd[1]: Finished Cloud-init: Config Stage.
Oct 10 05:04:55 np0005479822 systemd[1]: Starting Cloud-init: Final Stage...
Oct 10 05:04:56 np0005479822 cloud-init[1040]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 10 Oct 2025 09:04:56 +0000. Up 9.91 seconds.
Oct 10 05:04:56 np0005479822 cloud-init[1042]: #############################################################
Oct 10 05:04:56 np0005479822 cloud-init[1043]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 10 05:04:56 np0005479822 cloud-init[1045]: 256 SHA256:tCnXzgijmRuBp7Wqe0xuaPNgbnoKDvhNpO1wHQ5UY64 root@np0005479822.novalocal (ECDSA)
Oct 10 05:04:56 np0005479822 cloud-init[1047]: 256 SHA256:olpr5gMCN9xU4nDpDbOa35Uc2XGLsvtuw3v6LGO0SRY root@np0005479822.novalocal (ED25519)
Oct 10 05:04:56 np0005479822 cloud-init[1049]: 3072 SHA256:g3CHQSjxteelLmbilWjb94SN/JociuetcndYLvv3Iwc root@np0005479822.novalocal (RSA)
Oct 10 05:04:56 np0005479822 cloud-init[1050]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 10 05:04:56 np0005479822 cloud-init[1051]: #############################################################
Oct 10 05:04:56 np0005479822 cloud-init[1040]: Cloud-init v. 24.4-7.el9 finished at Fri, 10 Oct 2025 09:04:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.09 seconds
Oct 10 05:04:56 np0005479822 systemd[1]: Finished Cloud-init: Final Stage.
Oct 10 05:04:56 np0005479822 systemd[1]: Reached target Cloud-init target.
Oct 10 05:04:56 np0005479822 systemd[1]: Startup finished in 1.685s (kernel) + 2.738s (initrd) + 5.731s (userspace) = 10.155s.
Oct 10 05:04:58 np0005479822 chronyd[795]: Selected source 45.61.49.156 (2.centos.pool.ntp.org)
Oct 10 05:04:58 np0005479822 chronyd[795]: System clock TAI offset set to 37 seconds
Oct 10 05:05:03 np0005479822 irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 10 05:05:03 np0005479822 irqbalance[780]: IRQ 25 affinity is now unmanaged
Oct 10 05:05:03 np0005479822 irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 10 05:05:03 np0005479822 irqbalance[780]: IRQ 31 affinity is now unmanaged
Oct 10 05:05:03 np0005479822 irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 10 05:05:03 np0005479822 irqbalance[780]: IRQ 28 affinity is now unmanaged
Oct 10 05:05:03 np0005479822 irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 10 05:05:03 np0005479822 irqbalance[780]: IRQ 32 affinity is now unmanaged
Oct 10 05:05:03 np0005479822 irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 10 05:05:03 np0005479822 irqbalance[780]: IRQ 30 affinity is now unmanaged
Oct 10 05:05:03 np0005479822 irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 10 05:05:03 np0005479822 irqbalance[780]: IRQ 29 affinity is now unmanaged
Oct 10 05:05:03 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 05:05:19 np0005479822 systemd[1]: Created slice User Slice of UID 1000.
Oct 10 05:05:19 np0005479822 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 10 05:05:19 np0005479822 systemd-logind[789]: New session 1 of user zuul.
Oct 10 05:05:19 np0005479822 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 10 05:05:19 np0005479822 systemd[1]: Starting User Manager for UID 1000...
Oct 10 05:05:19 np0005479822 systemd[1059]: Queued start job for default target Main User Target.
Oct 10 05:05:19 np0005479822 systemd[1059]: Created slice User Application Slice.
Oct 10 05:05:19 np0005479822 systemd[1059]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 05:05:19 np0005479822 systemd[1059]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 05:05:19 np0005479822 systemd[1059]: Reached target Paths.
Oct 10 05:05:19 np0005479822 systemd[1059]: Reached target Timers.
Oct 10 05:05:19 np0005479822 systemd[1059]: Starting D-Bus User Message Bus Socket...
Oct 10 05:05:19 np0005479822 systemd[1059]: Starting Create User's Volatile Files and Directories...
Oct 10 05:05:19 np0005479822 systemd[1059]: Finished Create User's Volatile Files and Directories.
Oct 10 05:05:19 np0005479822 systemd[1059]: Listening on D-Bus User Message Bus Socket.
Oct 10 05:05:19 np0005479822 systemd[1059]: Reached target Sockets.
Oct 10 05:05:19 np0005479822 systemd[1059]: Reached target Basic System.
Oct 10 05:05:19 np0005479822 systemd[1059]: Reached target Main User Target.
Oct 10 05:05:19 np0005479822 systemd[1059]: Startup finished in 151ms.
Oct 10 05:05:19 np0005479822 systemd[1]: Started User Manager for UID 1000.
Oct 10 05:05:19 np0005479822 systemd[1]: Started Session 1 of User zuul.
Oct 10 05:05:20 np0005479822 python3[1142]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:05:23 np0005479822 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 05:05:23 np0005479822 python3[1170]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:05:32 np0005479822 python3[1230]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:05:33 np0005479822 python3[1270]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 10 05:05:35 np0005479822 python3[1296]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDEBkxJ4sw2+DK3cAbafLjRenK6XkRzPrF3EgUC0Qy/9kZ0kuErGkKyCEXRNE93NnKaUfoU9ebcJtP/W0B6xem+P337Yb5eT1d5d0DPlSyJ224O/rNncfiIo6YcMhrWXlb8yWwfHogZqjmOgJoH57cdsVMt26tUmFXzrJ1qEBloCvfoEe/tx8o3aeflIhUQ0zm2bbmhRn09oGRCODyyr02YoJZm5GbMiTb7mz8xvM31PEo8DzS5ti1YMOUi76ojLKIS6hZkIk4sUuSXmOwBoYhmyGjvs8csl/rxfVJq3bV+DFnatOKlFCyjgY0Ed4oCeReEGI6h29najM/8mUzfOeBj0dyWj3N3oOwlewtF5ifTB4JPwfEN1Rx37wbEzN/2Q7MOKzeWDxP2E0trD5ey9oqWFCpRpuJURMiPr+A6h070uR8U8vUNxGtH3vAmkuN+p3w79WF1wzlCmcoC+oSdwETcoOqkD84qkNgYJpVVpboSnwBo/H/aPJuJhs/nYPhz+c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:35 np0005479822 python3[1320]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:36 np0005479822 python3[1419]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:05:36 np0005479822 python3[1490]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087136.2035913-252-253170561469944/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=bea29065a9ff49468ede17c902a062ce_id_rsa follow=False checksum=6477c55dd7b29e382b0ff49c34043ebcd2bcc305 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:37 np0005479822 python3[1613]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:05:37 np0005479822 python3[1684]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087137.1911607-307-264647230127252/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=bea29065a9ff49468ede17c902a062ce_id_rsa.pub follow=False checksum=8b86d6c8317b3a249fa7c3a90607af8e51a186ef backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:39 np0005479822 python3[1732]: ansible-ping Invoked with data=pong
Oct 10 05:05:40 np0005479822 python3[1756]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:05:42 np0005479822 python3[1814]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 10 05:05:43 np0005479822 python3[1846]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:44 np0005479822 python3[1870]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:44 np0005479822 python3[1894]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:44 np0005479822 python3[1918]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:44 np0005479822 python3[1942]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:45 np0005479822 python3[1966]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:46 np0005479822 python3[1992]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:47 np0005479822 python3[2070]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:05:48 np0005479822 python3[2143]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087147.2338092-33-274231801842306/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:48 np0005479822 python3[2191]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:49 np0005479822 python3[2215]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:49 np0005479822 python3[2239]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:49 np0005479822 python3[2263]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:49 np0005479822 python3[2287]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:50 np0005479822 python3[2311]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:50 np0005479822 python3[2335]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:50 np0005479822 python3[2359]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:51 np0005479822 python3[2383]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:51 np0005479822 python3[2407]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:51 np0005479822 python3[2431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:52 np0005479822 python3[2455]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:52 np0005479822 python3[2479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:52 np0005479822 python3[2503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:52 np0005479822 python3[2527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:53 np0005479822 python3[2551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:53 np0005479822 python3[2575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:53 np0005479822 python3[2599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:54 np0005479822 python3[2623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:54 np0005479822 python3[2647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:54 np0005479822 python3[2671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:55 np0005479822 python3[2695]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:55 np0005479822 python3[2719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:55 np0005479822 python3[2743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:55 np0005479822 python3[2767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:56 np0005479822 python3[2791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:05:58 np0005479822 python3[2817]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 05:05:58 np0005479822 systemd[1]: Starting Time & Date Service...
Oct 10 05:05:58 np0005479822 systemd[1]: Started Time & Date Service.
Oct 10 05:05:58 np0005479822 systemd-timedated[2819]: Changed time zone to 'UTC' (UTC).
Oct 10 05:05:58 np0005479822 python3[2848]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:05:59 np0005479822 python3[2924]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:05:59 np0005479822 python3[2995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760087159.1770568-252-19086082499537/source _original_basename=tmpbyh7uw4_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:06:00 np0005479822 python3[3095]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:06:00 np0005479822 python3[3166]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760087160.0496268-303-249221800546970/source _original_basename=tmpjq1p3cs3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:06:01 np0005479822 python3[3268]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:06:02 np0005479822 python3[3341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760087161.3937972-382-8132268083242/source _original_basename=tmp9xext10y follow=False checksum=6cbe59410b7de8cef4e7b572834f646539a41bfa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:06:02 np0005479822 python3[3389]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:06:02 np0005479822 python3[3415]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:06:03 np0005479822 irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 10 05:06:03 np0005479822 irqbalance[780]: IRQ 26 affinity is now unmanaged
Oct 10 05:06:03 np0005479822 python3[3495]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:06:03 np0005479822 python3[3568]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087163.2094662-453-88093426221127/source _original_basename=tmpysg1iv2o follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:06:04 np0005479822 python3[3619]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-80e1-2ccb-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:06:05 np0005479822 python3[3647]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-80e1-2ccb-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 10 05:06:06 np0005479822 python3[3675]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:06:23 np0005479822 python3[3701]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:06:28 np0005479822 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 05:07:23 np0005479822 systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 10 05:07:27 np0005479822 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 10 05:07:27 np0005479822 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7518] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 05:07:27 np0005479822 systemd-udevd[3704]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7720] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7745] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7749] device (eth1): carrier: link connected
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7750] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7755] policy: auto-activating connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b)
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7759] device (eth1): Activation: starting connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b)
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7759] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7762] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7765] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:07:27 np0005479822 NetworkManager[857]: <info>  [1760087247.7769] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:07:27 np0005479822 systemd[1059]: Starting Mark boot as successful...
Oct 10 05:07:27 np0005479822 systemd[1059]: Finished Mark boot as successful.
Oct 10 05:07:28 np0005479822 systemd-logind[789]: New session 3 of user zuul.
Oct 10 05:07:28 np0005479822 systemd[1]: Started Session 3 of User zuul.
Oct 10 05:07:28 np0005479822 python3[3736]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-dbf0-3472-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:07:39 np0005479822 python3[3816]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:07:39 np0005479822 python3[3889]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087258.6499834-155-60949084712063/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=443c8cb365d54d2c1d375a8deb27e5080d25cfce backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:07:40 np0005479822 python3[3939]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:07:40 np0005479822 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 05:07:40 np0005479822 systemd[1]: Stopped Network Manager Wait Online.
Oct 10 05:07:40 np0005479822 systemd[1]: Stopping Network Manager Wait Online...
Oct 10 05:07:40 np0005479822 systemd[1]: Stopping Network Manager...
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1080] caught SIGTERM, shutting down normally.
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1100] dhcp4 (eth0): canceled DHCP transaction
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1101] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1101] dhcp4 (eth0): state changed no lease
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1105] manager: NetworkManager state is now CONNECTING
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1164] dhcp4 (eth1): canceled DHCP transaction
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1164] dhcp4 (eth1): state changed no lease
Oct 10 05:07:40 np0005479822 NetworkManager[857]: <info>  [1760087260.1225] exiting (success)
Oct 10 05:07:40 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 05:07:40 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 05:07:40 np0005479822 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 05:07:40 np0005479822 systemd[1]: Stopped Network Manager.
Oct 10 05:07:40 np0005479822 systemd[1]: NetworkManager.service: Consumed 1.122s CPU time, 10.2M memory peak.
Oct 10 05:07:40 np0005479822 systemd[1]: Starting Network Manager...
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.1920] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:fb56a8ec-12f4-4a91-b74d-e8ffc8e6ce0c)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.1924] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.1991] manager[0x5634a5673070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 05:07:40 np0005479822 systemd[1]: Starting Hostname Service...
Oct 10 05:07:40 np0005479822 systemd[1]: Started Hostname Service.
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2848] hostname: hostname: using hostnamed
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2849] hostname: static hostname changed from (none) to "np0005479822.novalocal"
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2856] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2864] manager[0x5634a5673070]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2864] manager[0x5634a5673070]: rfkill: WWAN hardware radio set enabled
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2901] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2901] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2902] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2902] manager: Networking is enabled by state file
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2905] settings: Loaded settings plugin: keyfile (internal)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2910] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2938] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2953] dhcp: init: Using DHCP client 'internal'
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2956] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2963] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2970] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2979] device (lo): Activation: starting connection 'lo' (da285bad-fb13-45e9-93ce-582789837c7a)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2987] device (eth0): carrier: link connected
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2992] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2998] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.2998] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3007] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3017] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3027] device (eth1): carrier: link connected
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3032] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3038] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b) (indicated)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3039] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3046] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3056] device (eth1): Activation: starting connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b)
Oct 10 05:07:40 np0005479822 systemd[1]: Started Network Manager.
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3067] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3075] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3077] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3080] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3084] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3087] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3093] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3104] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3111] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3114] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3129] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3136] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3158] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3166] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3173] device (lo): Activation: successful, device activated.
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3183] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 05:07:40 np0005479822 systemd[1]: Starting Network Manager Wait Online...
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3189] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3265] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3298] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3300] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3304] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3307] device (eth0): Activation: successful, device activated.
Oct 10 05:07:40 np0005479822 NetworkManager[3951]: <info>  [1760087260.3321] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 05:07:40 np0005479822 python3[4023]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-dbf0-3472-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:07:50 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 05:08:10 np0005479822 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2351] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 05:08:25 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 05:08:25 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2577] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2581] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2591] device (eth1): Activation: successful, device activated.
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2599] manager: startup complete
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2601] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <warn>  [1760087305.2607] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2627] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 systemd[1]: Finished Network Manager Wait Online.
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2714] dhcp4 (eth1): canceled DHCP transaction
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2714] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2714] dhcp4 (eth1): state changed no lease
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2740] policy: auto-activating connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2749] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2751] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2755] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2765] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2777] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2824] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2827] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:08:25 np0005479822 NetworkManager[3951]: <info>  [1760087305.2837] device (eth1): Activation: successful, device activated.
Oct 10 05:08:35 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 05:08:40 np0005479822 systemd[1]: session-3.scope: Deactivated successfully.
Oct 10 05:08:40 np0005479822 systemd[1]: session-3.scope: Consumed 1.829s CPU time.
Oct 10 05:08:40 np0005479822 systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Oct 10 05:08:40 np0005479822 systemd-logind[789]: Removed session 3.
Oct 10 05:09:18 np0005479822 systemd-logind[789]: New session 4 of user zuul.
Oct 10 05:09:18 np0005479822 systemd[1]: Started Session 4 of User zuul.
Oct 10 05:09:19 np0005479822 python3[4133]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:09:19 np0005479822 python3[4206]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087358.7119231-373-218214675868800/source _original_basename=tmpgwmafwp5 follow=False checksum=0edcb8668707f95c4678608a04fc39cdafb654ec backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:09:23 np0005479822 systemd[1]: session-4.scope: Deactivated successfully.
Oct 10 05:09:23 np0005479822 systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Oct 10 05:09:23 np0005479822 systemd-logind[789]: Removed session 4.
Oct 10 05:10:41 np0005479822 systemd[1059]: Created slice User Background Tasks Slice.
Oct 10 05:10:41 np0005479822 systemd[1059]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 05:10:41 np0005479822 systemd[1059]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 05:15:52 np0005479822 systemd-logind[789]: New session 5 of user zuul.
Oct 10 05:15:52 np0005479822 systemd[1]: Started Session 5 of User zuul.
Oct 10 05:15:52 np0005479822 python3[4266]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-305a-504c-000000001cfe-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:15:53 np0005479822 python3[4295]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:15:53 np0005479822 python3[4321]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:15:53 np0005479822 python3[4347]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:15:54 np0005479822 python3[4373]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:15:54 np0005479822 python3[4399]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:15:54 np0005479822 python3[4399]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 10 05:15:55 np0005479822 python3[4425]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 05:15:55 np0005479822 systemd[1]: Reloading.
Oct 10 05:15:55 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:15:57 np0005479822 python3[4480]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 10 05:15:57 np0005479822 python3[4506]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:15:57 np0005479822 python3[4534]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:15:58 np0005479822 python3[4562]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:15:58 np0005479822 python3[4590]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:15:59 np0005479822 python3[4617]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-305a-504c-000000001d04-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:15:59 np0005479822 python3[4647]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:16:02 np0005479822 systemd[1]: session-5.scope: Deactivated successfully.
Oct 10 05:16:02 np0005479822 systemd[1]: session-5.scope: Consumed 3.745s CPU time.
Oct 10 05:16:02 np0005479822 systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Oct 10 05:16:02 np0005479822 systemd-logind[789]: Removed session 5.
Oct 10 05:16:04 np0005479822 systemd-logind[789]: New session 6 of user zuul.
Oct 10 05:16:04 np0005479822 systemd[1]: Started Session 6 of User zuul.
Oct 10 05:16:04 np0005479822 python3[4683]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 05:16:19 np0005479822 kernel: SELinux:  Converting 363 SID table entries...
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:16:19 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:16:28 np0005479822 kernel: SELinux:  Converting 363 SID table entries...
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:16:28 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:16:36 np0005479822 kernel: SELinux:  Converting 363 SID table entries...
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:16:36 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:16:38 np0005479822 setsebool[4753]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 10 05:16:38 np0005479822 setsebool[4753]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 10 05:16:50 np0005479822 kernel: SELinux:  Converting 366 SID table entries...
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:16:50 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:17:08 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 05:17:08 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:17:08 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:17:08 np0005479822 systemd[1]: Reloading.
Oct 10 05:17:09 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:17:09 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:17:09 np0005479822 systemd[1]: Starting PackageKit Daemon...
Oct 10 05:17:09 np0005479822 systemd[1]: Starting Authorization Manager...
Oct 10 05:17:10 np0005479822 polkitd[6374]: Started polkitd version 0.117
Oct 10 05:17:10 np0005479822 systemd[1]: Started Authorization Manager.
Oct 10 05:17:10 np0005479822 systemd[1]: Started PackageKit Daemon.
Oct 10 05:17:41 np0005479822 python3[18088]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-c8da-0a8f-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:17:42 np0005479822 kernel: evm: overlay not supported
Oct 10 05:17:42 np0005479822 systemd[1059]: Starting D-Bus User Message Bus...
Oct 10 05:17:42 np0005479822 dbus-broker-launch[18506]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 10 05:17:42 np0005479822 dbus-broker-launch[18506]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 10 05:17:42 np0005479822 systemd[1059]: Started D-Bus User Message Bus.
Oct 10 05:17:42 np0005479822 dbus-broker-lau[18506]: Ready
Oct 10 05:17:42 np0005479822 systemd[1059]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 05:17:42 np0005479822 systemd[1059]: Created slice Slice /user.
Oct 10 05:17:42 np0005479822 systemd[1059]: podman-18441.scope: unit configures an IP firewall, but not running as root.
Oct 10 05:17:42 np0005479822 systemd[1059]: (This warning is only shown for the first unit using IP firewalling.)
Oct 10 05:17:42 np0005479822 systemd[1059]: Started podman-18441.scope.
Oct 10 05:17:42 np0005479822 systemd[1059]: Started podman-pause-ba406518.scope.
Oct 10 05:17:43 np0005479822 systemd[1]: session-6.scope: Deactivated successfully.
Oct 10 05:17:43 np0005479822 systemd[1]: session-6.scope: Consumed 1min 1.661s CPU time.
Oct 10 05:17:43 np0005479822 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Oct 10 05:17:43 np0005479822 systemd-logind[789]: Removed session 6.
Oct 10 05:18:01 np0005479822 systemd-logind[789]: New session 7 of user zuul.
Oct 10 05:18:01 np0005479822 systemd[1]: Started Session 7 of User zuul.
Oct 10 05:18:01 np0005479822 python3[24837]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:18:01 np0005479822 python3[24980]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:18:02 np0005479822 python3[25264]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005479822.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 10 05:18:03 np0005479822 python3[25468]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 05:18:03 np0005479822 python3[25710]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:18:04 np0005479822 python3[25932]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087883.4861338-151-36587341816073/source _original_basename=tmp22ygpsbj follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:18:05 np0005479822 python3[26247]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct 10 05:18:05 np0005479822 systemd[1]: Starting Hostname Service...
Oct 10 05:18:05 np0005479822 systemd[1]: Started Hostname Service.
Oct 10 05:18:05 np0005479822 systemd-hostnamed[26348]: Changed pretty hostname to 'compute-1'
Oct 10 05:18:05 np0005479822 systemd-hostnamed[26348]: Hostname set to <compute-1> (static)
Oct 10 05:18:05 np0005479822 NetworkManager[3951]: <info>  [1760087885.3177] hostname: static hostname changed from "np0005479822.novalocal" to "compute-1"
Oct 10 05:18:05 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 05:18:05 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 05:18:05 np0005479822 systemd[1]: session-7.scope: Deactivated successfully.
Oct 10 05:18:05 np0005479822 systemd[1]: session-7.scope: Consumed 2.600s CPU time.
Oct 10 05:18:05 np0005479822 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Oct 10 05:18:05 np0005479822 systemd-logind[789]: Removed session 7.
Oct 10 05:18:05 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:18:05 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:18:05 np0005479822 systemd[1]: man-db-cache-update.service: Consumed 1min 9.352s CPU time.
Oct 10 05:18:05 np0005479822 systemd[1]: run-r7d12f6832ac24b84b90a692731cf39e8.service: Deactivated successfully.
Oct 10 05:18:15 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 05:18:35 np0005479822 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 05:20:31 np0005479822 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 10 05:20:31 np0005479822 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 10 05:20:31 np0005479822 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 10 05:20:31 np0005479822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 10 05:21:41 np0005479822 systemd-logind[789]: New session 8 of user zuul.
Oct 10 05:21:41 np0005479822 systemd[1]: Started Session 8 of User zuul.
Oct 10 05:21:41 np0005479822 python3[26607]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:21:43 np0005479822 python3[26723]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:44 np0005479822 python3[26796]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:44 np0005479822 python3[26822]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:44 np0005479822 python3[26895]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:44 np0005479822 python3[26921]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:45 np0005479822 python3[26994]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:45 np0005479822 python3[27020]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:45 np0005479822 python3[27093]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:46 np0005479822 python3[27119]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:46 np0005479822 python3[27192]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:46 np0005479822 python3[27218]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:47 np0005479822 python3[27292]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:47 np0005479822 python3[27318]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:21:48 np0005479822 python3[27391]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:21:53 np0005479822 irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 10 05:21:53 np0005479822 irqbalance[780]: IRQ 27 affinity is now unmanaged
Oct 10 05:21:59 np0005479822 python3[27439]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:22:15 np0005479822 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 05:26:59 np0005479822 systemd[1]: session-8.scope: Deactivated successfully.
Oct 10 05:26:59 np0005479822 systemd[1]: session-8.scope: Consumed 5.604s CPU time.
Oct 10 05:26:59 np0005479822 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Oct 10 05:26:59 np0005479822 systemd-logind[789]: Removed session 8.
Oct 10 05:33:17 np0005479822 systemd-logind[789]: New session 9 of user zuul.
Oct 10 05:33:17 np0005479822 systemd[1]: Started Session 9 of User zuul.
Oct 10 05:33:18 np0005479822 python3.9[27705]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:33:20 np0005479822 python3.9[27886]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:33:28 np0005479822 systemd[1]: session-9.scope: Deactivated successfully.
Oct 10 05:33:28 np0005479822 systemd[1]: session-9.scope: Consumed 8.341s CPU time.
Oct 10 05:33:28 np0005479822 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Oct 10 05:33:28 np0005479822 systemd-logind[789]: Removed session 9.
Oct 10 05:33:43 np0005479822 systemd-logind[789]: New session 10 of user zuul.
Oct 10 05:33:43 np0005479822 systemd[1]: Started Session 10 of User zuul.
Oct 10 05:33:44 np0005479822 python3.9[28099]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 05:33:45 np0005479822 python3.9[28273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:33:46 np0005479822 python3.9[28425]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:33:47 np0005479822 python3.9[28578]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:33:48 np0005479822 python3.9[28730]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:33:49 np0005479822 python3.9[28882]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:33:50 np0005479822 python3.9[29005]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760088829.1543286-178-261762968508380/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:33:51 np0005479822 python3.9[29157]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:33:52 np0005479822 python3.9[29313]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:33:53 np0005479822 python3.9[29463]: ansible-ansible.builtin.service_facts Invoked
Oct 10 05:34:00 np0005479822 python3.9[29718]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:34:01 np0005479822 python3.9[29868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:34:02 np0005479822 python3.9[30022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:34:03 np0005479822 python3.9[30180]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:34:04 np0005479822 python3.9[30264]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:34:49 np0005479822 systemd[1]: Reloading.
Oct 10 05:34:49 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:34:49 np0005479822 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 10 05:34:50 np0005479822 systemd[1]: Reloading.
Oct 10 05:34:50 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:34:50 np0005479822 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 10 05:34:50 np0005479822 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 10 05:34:50 np0005479822 systemd[1]: Reloading.
Oct 10 05:34:50 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:34:50 np0005479822 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 10 05:34:50 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:34:50 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:34:50 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:35:59 np0005479822 kernel: SELinux:  Converting 2713 SID table entries...
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:35:59 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:35:59 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 10 05:35:59 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:35:59 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:35:59 np0005479822 systemd[1]: Reloading.
Oct 10 05:35:59 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:36:00 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:36:00 np0005479822 systemd[1]: Starting PackageKit Daemon...
Oct 10 05:36:00 np0005479822 systemd[1]: Started PackageKit Daemon.
Oct 10 05:36:01 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:36:01 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:36:01 np0005479822 systemd[1]: man-db-cache-update.service: Consumed 1.422s CPU time.
Oct 10 05:36:01 np0005479822 systemd[1]: run-r470426eaf19f4e2db9d170b5e5fda398.service: Deactivated successfully.
Oct 10 05:36:01 np0005479822 python3.9[31770]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:36:03 np0005479822 python3.9[32052]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 05:36:04 np0005479822 python3.9[32204]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 05:36:07 np0005479822 python3.9[32358]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:36:09 np0005479822 python3.9[32510]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 05:36:10 np0005479822 python3.9[32662]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:36:14 np0005479822 python3.9[32814]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:36:15 np0005479822 python3.9[32937]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760088970.906032-640-201304920306786/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:36:19 np0005479822 python3.9[33089]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 05:36:20 np0005479822 python3.9[33242]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 05:36:21 np0005479822 python3.9[33400]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 05:36:21 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:36:21 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:36:22 np0005479822 python3.9[33561]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 05:36:23 np0005479822 python3.9[33714]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 05:36:24 np0005479822 python3.9[33872]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 05:36:25 np0005479822 python3.9[34024]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:36:28 np0005479822 python3.9[34177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:36:28 np0005479822 python3.9[34329]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:36:29 np0005479822 python3.9[34452]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760088988.3343694-925-119649854274539/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:36:30 np0005479822 python3.9[34604]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:36:30 np0005479822 systemd[1]: Starting Load Kernel Modules...
Oct 10 05:36:30 np0005479822 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 10 05:36:30 np0005479822 kernel: Bridge firewalling registered
Oct 10 05:36:30 np0005479822 systemd-modules-load[34608]: Inserted module 'br_netfilter'
Oct 10 05:36:30 np0005479822 systemd[1]: Finished Load Kernel Modules.
Oct 10 05:36:31 np0005479822 python3.9[34763]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:36:32 np0005479822 python3.9[34886]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760088991.052329-994-192510616130717/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:36:33 np0005479822 python3.9[35038]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:36:36 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:36:36 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:36:36 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:36:36 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:36:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:36:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:36:37 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:36:38 np0005479822 python3.9[36363]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:36:39 np0005479822 python3.9[37319]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 05:36:40 np0005479822 python3.9[38028]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:36:41 np0005479822 python3.9[38943]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:36:41 np0005479822 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 05:36:41 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:36:41 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:36:41 np0005479822 systemd[1]: man-db-cache-update.service: Consumed 5.789s CPU time.
Oct 10 05:36:41 np0005479822 systemd[1]: run-r51d66231484b492bba97dc35587a0e5c.service: Deactivated successfully.
Oct 10 05:36:42 np0005479822 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 05:36:43 np0005479822 python3.9[39589]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:36:43 np0005479822 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 05:36:43 np0005479822 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 05:36:43 np0005479822 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 05:36:43 np0005479822 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 05:36:43 np0005479822 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 05:36:44 np0005479822 python3.9[39751]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 05:36:48 np0005479822 python3.9[39903]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:36:48 np0005479822 systemd[1]: Reloading.
Oct 10 05:36:48 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:36:48 np0005479822 systemd[1]: Starting dnf makecache...
Oct 10 05:36:48 np0005479822 dnf[39942]: Failed determining last makecache time.
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-barbican-42b4c41831408a8e323 107 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 161 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-cinder-1c00d6490d88e436f26ef 161 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-python-stevedore-c4acc5639fd2329372142 179 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-python-cloudkitty-tests-tempest-3961dc 169 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-diskimage-builder-43381184423c185801b5 158 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 177 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-python-designate-tests-tempest-347fdbc 159 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-glance-1fd12c29b339f30fe823e 159 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 154 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-manila-3c01b7181572c95dac462 163 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-python-vmware-nsxlib-458234972d1428ac9 163 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-octavia-ba397f07a7331190208c 157 kB/s | 3.0 kB     00:00
Oct 10 05:36:48 np0005479822 dnf[39942]: delorean-openstack-watcher-c014f81a8647287f6dcc 142 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: delorean-edpm-image-builder-55ba53cf215b14ed95b 152 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 141 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: delorean-openstack-swift-dc98a8463506ac520c469a 140 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: delorean-python-tempestconf-8515371b7cceebd4282 142 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: delorean-openstack-heat-ui-013accbfd179753bc3f0 138 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: CentOS Stream 9 - BaseOS                         71 kB/s | 6.7 kB     00:00
Oct 10 05:36:49 np0005479822 python3.9[40106]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:36:49 np0005479822 systemd[1]: Reloading.
Oct 10 05:36:49 np0005479822 dnf[39942]: CentOS Stream 9 - AppStream                      71 kB/s | 6.8 kB     00:00
Oct 10 05:36:49 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:36:49 np0005479822 dnf[39942]: CentOS Stream 9 - CRB                            61 kB/s | 6.6 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: CentOS Stream 9 - Extras packages                64 kB/s | 8.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: dlrn-antelope-testing                            81 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: dlrn-antelope-build-deps                        103 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: centos9-rabbitmq                                 83 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: centos9-storage                                 114 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: centos9-opstools                                112 kB/s | 3.0 kB     00:00
Oct 10 05:36:49 np0005479822 dnf[39942]: NFV SIG OpenvSwitch                             141 kB/s | 3.0 kB     00:00
Oct 10 05:36:50 np0005479822 dnf[39942]: repo-setup-centos-appstream                     215 kB/s | 4.4 kB     00:00
Oct 10 05:36:50 np0005479822 dnf[39942]: repo-setup-centos-baseos                        189 kB/s | 3.9 kB     00:00
Oct 10 05:36:50 np0005479822 dnf[39942]: repo-setup-centos-highavailability              191 kB/s | 3.9 kB     00:00
Oct 10 05:36:50 np0005479822 dnf[39942]: repo-setup-centos-powertools                    208 kB/s | 4.3 kB     00:00
Oct 10 05:36:50 np0005479822 dnf[39942]: Extra Packages for Enterprise Linux 9 - x86_64  200 kB/s |  25 kB     00:00
Oct 10 05:36:50 np0005479822 python3.9[40325]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:36:50 np0005479822 dnf[39942]: Metadata cache created.
Oct 10 05:36:50 np0005479822 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 10 05:36:50 np0005479822 systemd[1]: Finished dnf makecache.
Oct 10 05:36:50 np0005479822 systemd[1]: dnf-makecache.service: Consumed 1.775s CPU time.
Oct 10 05:36:51 np0005479822 python3.9[40478]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:36:51 np0005479822 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 10 05:36:52 np0005479822 python3.9[40631]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:36:54 np0005479822 python3.9[40793]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:36:55 np0005479822 python3.9[40946]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:36:56 np0005479822 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 05:36:56 np0005479822 systemd[1]: Stopped Apply Kernel Variables.
Oct 10 05:36:56 np0005479822 systemd[1]: Stopping Apply Kernel Variables...
Oct 10 05:36:56 np0005479822 systemd[1]: Starting Apply Kernel Variables...
Oct 10 05:36:56 np0005479822 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 05:36:56 np0005479822 systemd[1]: Finished Apply Kernel Variables.
Oct 10 05:36:57 np0005479822 systemd[1]: session-10.scope: Deactivated successfully.
Oct 10 05:36:57 np0005479822 systemd[1]: session-10.scope: Consumed 2min 18.572s CPU time.
Oct 10 05:36:57 np0005479822 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Oct 10 05:36:57 np0005479822 systemd-logind[789]: Removed session 10.
Oct 10 05:37:02 np0005479822 systemd-logind[789]: New session 11 of user zuul.
Oct 10 05:37:02 np0005479822 systemd[1]: Started Session 11 of User zuul.
Oct 10 05:37:03 np0005479822 python3.9[41129]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:37:05 np0005479822 python3.9[41285]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 05:37:06 np0005479822 python3.9[41438]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 05:37:07 np0005479822 python3.9[41596]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 05:37:08 np0005479822 python3.9[41756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:37:09 np0005479822 python3.9[41840]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 05:37:12 np0005479822 python3.9[42004]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:37:23 np0005479822 kernel: SELinux:  Converting 2724 SID table entries...
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:37:23 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:37:23 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 10 05:37:23 np0005479822 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 10 05:37:24 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:37:25 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:37:25 np0005479822 systemd[1]: Reloading.
Oct 10 05:37:25 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:37:25 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:37:25 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:37:26 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:37:26 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:37:26 np0005479822 systemd[1]: run-r55563bbd2164407681bba82a98a79d56.service: Deactivated successfully.
Oct 10 05:37:29 np0005479822 python3.9[43105]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:37:29 np0005479822 systemd[1]: Reloading.
Oct 10 05:37:29 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:37:29 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:37:29 np0005479822 systemd[1]: Starting Open vSwitch Database Unit...
Oct 10 05:37:29 np0005479822 chown[43148]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 10 05:37:29 np0005479822 ovs-ctl[43153]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 10 05:37:29 np0005479822 ovs-ctl[43153]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 10 05:37:29 np0005479822 ovs-ctl[43153]: Starting ovsdb-server [  OK  ]
Oct 10 05:37:29 np0005479822 ovs-vsctl[43202]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 10 05:37:29 np0005479822 ovs-vsctl[43222]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ee0899c1-415d-4aa8-abe8-1240b4e8bf2c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 10 05:37:29 np0005479822 ovs-ctl[43153]: Configuring Open vSwitch system IDs [  OK  ]
Oct 10 05:37:29 np0005479822 ovs-vsctl[43227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 10 05:37:29 np0005479822 ovs-ctl[43153]: Enabling remote OVSDB managers [  OK  ]
Oct 10 05:37:29 np0005479822 systemd[1]: Started Open vSwitch Database Unit.
Oct 10 05:37:29 np0005479822 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 10 05:37:29 np0005479822 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 10 05:37:29 np0005479822 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 10 05:37:30 np0005479822 kernel: openvswitch: Open vSwitch switching datapath
Oct 10 05:37:30 np0005479822 ovs-ctl[43273]: Inserting openvswitch module [  OK  ]
Oct 10 05:37:30 np0005479822 ovs-ctl[43242]: Starting ovs-vswitchd [  OK  ]
Oct 10 05:37:30 np0005479822 ovs-vsctl[43290]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 10 05:37:30 np0005479822 ovs-ctl[43242]: Enabling remote OVSDB managers [  OK  ]
Oct 10 05:37:30 np0005479822 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 10 05:37:30 np0005479822 systemd[1]: Starting Open vSwitch...
Oct 10 05:37:30 np0005479822 systemd[1]: Finished Open vSwitch.
Oct 10 05:37:31 np0005479822 python3.9[43442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:37:32 np0005479822 python3.9[43594]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 05:37:33 np0005479822 kernel: SELinux:  Converting 2738 SID table entries...
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:37:33 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:37:34 np0005479822 python3.9[43749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:37:35 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 10 05:37:36 np0005479822 python3.9[43907]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:37:38 np0005479822 python3.9[44060]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:37:39 np0005479822 python3.9[44347]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 05:37:40 np0005479822 python3.9[44497]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:37:41 np0005479822 python3.9[44651]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:37:43 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:37:43 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:37:43 np0005479822 systemd[1]: Reloading.
Oct 10 05:37:43 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:37:43 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:37:43 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:37:43 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:37:43 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:37:43 np0005479822 systemd[1]: run-r87a2c1727f9c41cab47cfb3e9d97f3de.service: Deactivated successfully.
Oct 10 05:37:44 np0005479822 python3.9[44968]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:37:44 np0005479822 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 05:37:44 np0005479822 systemd[1]: Stopped Network Manager Wait Online.
Oct 10 05:37:44 np0005479822 systemd[1]: Stopping Network Manager Wait Online...
Oct 10 05:37:44 np0005479822 systemd[1]: Stopping Network Manager...
Oct 10 05:37:44 np0005479822 NetworkManager[3951]: <info>  [1760089064.9235] caught SIGTERM, shutting down normally.
Oct 10 05:37:44 np0005479822 NetworkManager[3951]: <info>  [1760089064.9248] dhcp4 (eth0): canceled DHCP transaction
Oct 10 05:37:44 np0005479822 NetworkManager[3951]: <info>  [1760089064.9248] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:37:44 np0005479822 NetworkManager[3951]: <info>  [1760089064.9248] dhcp4 (eth0): state changed no lease
Oct 10 05:37:44 np0005479822 NetworkManager[3951]: <info>  [1760089064.9251] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 05:37:44 np0005479822 NetworkManager[3951]: <info>  [1760089064.9297] exiting (success)
Oct 10 05:37:44 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 05:37:44 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 05:37:44 np0005479822 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 05:37:44 np0005479822 systemd[1]: Stopped Network Manager.
Oct 10 05:37:44 np0005479822 systemd[1]: NetworkManager.service: Consumed 10.557s CPU time, 4.1M memory peak, read 0B from disk, written 21.0K to disk.
Oct 10 05:37:44 np0005479822 systemd[1]: Starting Network Manager...
Oct 10 05:37:44 np0005479822 NetworkManager[44982]: <info>  [1760089064.9976] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:fb56a8ec-12f4-4a91-b74d-e8ffc8e6ce0c)
Oct 10 05:37:44 np0005479822 NetworkManager[44982]: <info>  [1760089064.9976] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0032] manager[0x5562e6775090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 05:37:45 np0005479822 systemd[1]: Starting Hostname Service...
Oct 10 05:37:45 np0005479822 systemd[1]: Started Hostname Service.
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0803] hostname: hostname: using hostnamed
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0803] hostname: static hostname changed from (none) to "compute-1"
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0808] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0813] manager[0x5562e6775090]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0813] manager[0x5562e6775090]: rfkill: WWAN hardware radio set enabled
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0831] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0839] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0839] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0840] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0840] manager: Networking is enabled by state file
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0841] settings: Loaded settings plugin: keyfile (internal)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0844] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0865] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0872] dhcp: init: Using DHCP client 'internal'
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0874] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0878] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0881] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0886] device (lo): Activation: starting connection 'lo' (da285bad-fb13-45e9-93ce-582789837c7a)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0891] device (eth0): carrier: link connected
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0894] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0897] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0898] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0903] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0907] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0911] device (eth1): carrier: link connected
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0914] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0917] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684) (indicated)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0917] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0921] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0925] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 05:37:45 np0005479822 systemd[1]: Started Network Manager.
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0934] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0946] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0950] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0953] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0957] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0962] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0967] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0969] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0973] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0978] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0980] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.0987] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1000] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1008] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1011] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1017] device (lo): Activation: successful, device activated.
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1044] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1045] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1047] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 10 05:37:45 np0005479822 NetworkManager[44982]: <info>  [1760089065.1050] device (eth1): Activation: successful, device activated.
Oct 10 05:37:45 np0005479822 systemd[1]: Starting Network Manager Wait Online...
Oct 10 05:37:45 np0005479822 python3.9[45175]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6189] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6200] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6716] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6747] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6748] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6752] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6759] device (eth0): Activation: successful, device activated.
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6768] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 05:37:46 np0005479822 NetworkManager[44982]: <info>  [1760089066.6772] manager: startup complete
Oct 10 05:37:46 np0005479822 systemd[1]: Finished Network Manager Wait Online.
Oct 10 05:37:52 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:37:52 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:37:52 np0005479822 systemd[1]: Reloading.
Oct 10 05:37:53 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:37:53 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:37:53 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:37:53 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:37:53 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:37:54 np0005479822 systemd[1]: run-r53b25801855644fda0006ea6e3872cd7.service: Deactivated successfully.
Oct 10 05:37:55 np0005479822 python3.9[45656]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:37:56 np0005479822 python3.9[45808]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:37:56 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 05:37:57 np0005479822 python3.9[45962]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:37:58 np0005479822 python3.9[46114]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:37:58 np0005479822 python3.9[46266]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:37:59 np0005479822 python3.9[46418]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:00 np0005479822 python3.9[46570]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:38:01 np0005479822 python3.9[46693]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089079.7816956-648-175202567655004/.source _original_basename=.k1y5k47r follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:02 np0005479822 python3.9[46845]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:02 np0005479822 python3.9[46997]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 10 05:38:03 np0005479822 python3.9[47149]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:06 np0005479822 python3.9[47576]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 10 05:38:07 np0005479822 ansible-async_wrapper.py[47751]: Invoked with j666903082859 300 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.6225617-846-10490564304026/AnsiballZ_edpm_os_net_config.py _
Oct 10 05:38:07 np0005479822 ansible-async_wrapper.py[47754]: Starting module and watcher
Oct 10 05:38:07 np0005479822 ansible-async_wrapper.py[47754]: Start watching 47755 (300)
Oct 10 05:38:07 np0005479822 ansible-async_wrapper.py[47755]: Start module (47755)
Oct 10 05:38:07 np0005479822 ansible-async_wrapper.py[47751]: Return async_wrapper task started.
Oct 10 05:38:07 np0005479822 python3.9[47756]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 10 05:38:08 np0005479822 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 10 05:38:08 np0005479822 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 10 05:38:08 np0005479822 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 10 05:38:08 np0005479822 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 10 05:38:08 np0005479822 kernel: cfg80211: failed to load regulatory.db
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6013] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6034] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6632] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6635] audit: op="connection-add" uuid="7b38e87c-8a1a-4b20-a2bb-a211382e99d6" name="br-ex-br" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6662] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6665] audit: op="connection-add" uuid="147e12d3-cc78-483f-8eaf-60bc5efbc6e3" name="br-ex-port" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6691] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6695] audit: op="connection-add" uuid="9cd64f5e-42d5-478b-bba7-3758c836d419" name="eth1-port" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6719] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6722] audit: op="connection-add" uuid="f3c257ab-34f8-4934-9854-2b5be39213a2" name="vlan20-port" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6746] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6749] audit: op="connection-add" uuid="57782c16-7171-4876-91db-7bdd0f3697df" name="vlan21-port" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6773] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6776] audit: op="connection-add" uuid="8975fb2c-26ed-4bb6-a672-2c9d601fef11" name="vlan22-port" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6802] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6806] audit: op="connection-add" uuid="b0b0efba-77b0-45c1-8154-12560bce810e" name="vlan23-port" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6850] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6880] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6884] audit: op="connection-add" uuid="dcc008f2-7af8-482f-b581-a13cb582659a" name="br-ex-if" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6929] audit: op="connection-update" uuid="8d1fd0d1-71da-5534-9141-6178f63cc684" name="ci-private-network" args="connection.port-type,connection.controller,connection.slave-type,connection.master,connection.timestamp,ovs-external-ids.data,ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.dns" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6951] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6953] audit: op="connection-add" uuid="0f629f31-53ad-4e29-a9f5-d581129f9c7a" name="vlan20-if" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6975] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.6977] audit: op="connection-add" uuid="c574cdc0-53ad-4ec4-aec5-b19e5340147e" name="vlan21-if" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7001] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7003] audit: op="connection-add" uuid="e89dbb83-6269-4a8f-a897-bfe290191540" name="vlan22-if" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7024] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7026] audit: op="connection-add" uuid="080dfc91-6fae-4350-84d2-2d738b5ea6de" name="vlan23-if" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7040] audit: op="connection-delete" uuid="098c32ca-35a5-3746-add5-29d4391ea12b" name="Wired connection 1" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7054] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7067] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7071] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7b38e87c-8a1a-4b20-a2bb-a211382e99d6)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7072] audit: op="connection-activate" uuid="7b38e87c-8a1a-4b20-a2bb-a211382e99d6" name="br-ex-br" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7075] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7082] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7087] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (147e12d3-cc78-483f-8eaf-60bc5efbc6e3)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7089] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7095] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7100] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9cd64f5e-42d5-478b-bba7-3758c836d419)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7102] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7110] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7116] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f3c257ab-34f8-4934-9854-2b5be39213a2)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7118] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7127] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7131] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (57782c16-7171-4876-91db-7bdd0f3697df)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7134] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7141] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7147] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (8975fb2c-26ed-4bb6-a672-2c9d601fef11)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7149] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7157] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7162] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b0b0efba-77b0-45c1-8154-12560bce810e)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7163] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7167] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7169] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7177] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7182] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7188] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (dcc008f2-7af8-482f-b581-a13cb582659a)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7189] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7193] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7195] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7197] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7198] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7211] device (eth1): disconnecting for new activation request.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7212] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7216] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7218] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7220] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7224] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7229] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7234] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (0f629f31-53ad-4e29-a9f5-d581129f9c7a)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7235] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7239] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7241] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7244] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7247] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7252] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7257] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c574cdc0-53ad-4ec4-aec5-b19e5340147e)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7258] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7261] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7264] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7265] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7269] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7275] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7280] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e89dbb83-6269-4a8f-a897-bfe290191540)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7281] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7286] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7288] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7289] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7293] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7298] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7304] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (080dfc91-6fae-4350-84d2-2d738b5ea6de)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7305] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7308] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7310] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7312] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7314] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7330] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7332] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7337] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7340] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7349] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7353] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7357] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7360] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7362] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7367] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7372] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7375] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7377] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7383] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7388] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7392] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7394] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7398] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7403] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7406] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7408] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7413] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7417] dhcp4 (eth0): canceled DHCP transaction
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7417] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7418] dhcp4 (eth0): state changed no lease
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7419] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7445] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47757 uid=0 result="fail" reason="Device is not activated"
Oct 10 05:38:09 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7478] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7732] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 10 05:38:09 np0005479822 kernel: ovs-system: entered promiscuous mode
Oct 10 05:38:09 np0005479822 kernel: Timeout policy base is empty
Oct 10 05:38:09 np0005479822 systemd-udevd[47763]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7837] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7842] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7847] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7856] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7862] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7865] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7872] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7878] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7883] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7884] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7885] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7887] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7889] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7890] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7893] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7900] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7904] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7908] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7913] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7916] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7920] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7923] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7928] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7931] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7935] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7938] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7942] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7945] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7950] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7954] device (eth1): state change: ip-config -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7956] device (eth1)[Open vSwitch Port]: detaching ovs interface eth1
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7956] device (eth1): released from controller device eth1
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7962] device (eth1): disconnecting for new activation request.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7963] audit: op="connection-activate" uuid="8d1fd0d1-71da-5534-9141-6178f63cc684" name="ci-private-network" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.7967] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8031] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8032] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8042] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8051] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8055] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8060] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8085] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8088] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8104] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8107] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8116] device (eth1): Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 kernel: br-ex: entered promiscuous mode
Oct 10 05:38:09 np0005479822 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 10 05:38:09 np0005479822 kernel: vlan22: entered promiscuous mode
Oct 10 05:38:09 np0005479822 systemd-udevd[47762]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8260] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8271] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 kernel: vlan23: entered promiscuous mode
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8317] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8322] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8330] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 systemd-udevd[47761]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:38:09 np0005479822 kernel: vlan20: entered promiscuous mode
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8379] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 10 05:38:09 np0005479822 kernel: vlan21: entered promiscuous mode
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8407] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8427] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8429] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8437] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8451] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8459] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8477] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8494] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8499] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8518] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8520] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8530] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8538] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8544] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8551] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8564] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8598] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8599] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 05:38:09 np0005479822 NetworkManager[44982]: <info>  [1760089089.8604] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 05:38:10 np0005479822 NetworkManager[44982]: <info>  [1760089090.9907] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.2087] checkpoint[0x5562e674c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.2093] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 python3.9[48120]: ansible-ansible.legacy.async_status Invoked with jid=j666903082859.47751 mode=status _async_dir=/root/.ansible_async
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.5796] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.5805] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.7684] audit: op="networking-control" arg="global-dns-configuration" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.7777] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.7967] audit: op="networking-control" arg="global-dns-configuration" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.7990] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.9550] checkpoint[0x5562e674ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 10 05:38:11 np0005479822 NetworkManager[44982]: <info>  [1760089091.9555] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 05:38:12 np0005479822 ansible-async_wrapper.py[47755]: Module complete (47755)
Oct 10 05:38:12 np0005479822 ansible-async_wrapper.py[47754]: Done in kid B.
Oct 10 05:38:15 np0005479822 python3.9[48227]: ansible-ansible.legacy.async_status Invoked with jid=j666903082859.47751 mode=status _async_dir=/root/.ansible_async
Oct 10 05:38:15 np0005479822 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 05:38:15 np0005479822 python3.9[48328]: ansible-ansible.legacy.async_status Invoked with jid=j666903082859.47751 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 05:38:16 np0005479822 python3.9[48480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:38:16 np0005479822 python3.9[48603]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089095.8391328-927-213698857352948/.source.returncode _original_basename=.30gys562 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:17 np0005479822 python3.9[48755]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:38:18 np0005479822 python3.9[48879]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089097.257864-975-219272417300736/.source.cfg _original_basename=.h6t_at2r follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:19 np0005479822 python3.9[49031]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:38:19 np0005479822 systemd[1]: Reloading Network Manager...
Oct 10 05:38:19 np0005479822 NetworkManager[44982]: <info>  [1760089099.5880] audit: op="reload" arg="0" pid=49035 uid=0 result="success"
Oct 10 05:38:19 np0005479822 NetworkManager[44982]: <info>  [1760089099.5886] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 10 05:38:19 np0005479822 systemd[1]: Reloaded Network Manager.
Oct 10 05:38:20 np0005479822 systemd[1]: session-11.scope: Deactivated successfully.
Oct 10 05:38:20 np0005479822 systemd[1]: session-11.scope: Consumed 52.968s CPU time.
Oct 10 05:38:20 np0005479822 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Oct 10 05:38:20 np0005479822 systemd-logind[789]: Removed session 11.
Oct 10 05:38:25 np0005479822 systemd-logind[789]: New session 12 of user zuul.
Oct 10 05:38:25 np0005479822 systemd[1]: Started Session 12 of User zuul.
Oct 10 05:38:26 np0005479822 python3.9[49219]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:38:27 np0005479822 python3.9[49373]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:38:29 np0005479822 python3.9[49567]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:38:29 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 05:38:29 np0005479822 systemd[1]: session-12.scope: Deactivated successfully.
Oct 10 05:38:29 np0005479822 systemd[1]: session-12.scope: Consumed 2.711s CPU time.
Oct 10 05:38:29 np0005479822 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Oct 10 05:38:29 np0005479822 systemd-logind[789]: Removed session 12.
Oct 10 05:38:35 np0005479822 systemd-logind[789]: New session 13 of user zuul.
Oct 10 05:38:35 np0005479822 systemd[1]: Started Session 13 of User zuul.
Oct 10 05:38:36 np0005479822 python3.9[49750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:38:37 np0005479822 python3.9[49904]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:38:38 np0005479822 python3.9[50061]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:38:39 np0005479822 python3.9[50145]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:38:41 np0005479822 python3.9[50299]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:38:42 np0005479822 python3.9[50494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:43 np0005479822 python3.9[50646]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:38:44 np0005479822 systemd[1]: var-lib-containers-storage-overlay-compat2852767381-merged.mount: Deactivated successfully.
Oct 10 05:38:44 np0005479822 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2170325221-merged.mount: Deactivated successfully.
Oct 10 05:38:44 np0005479822 podman[50647]: 2025-10-10 09:38:44.049219817 +0000 UTC m=+0.076965870 system refresh
Oct 10 05:38:45 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:38:45 np0005479822 python3.9[50808]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:38:45 np0005479822 python3.9[50931]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089124.2955315-198-187223606016097/.source.json follow=False _original_basename=podman_network_config.j2 checksum=285b71da672ecff99ec1ea5d612fdfcd7171c48f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:38:46 np0005479822 python3.9[51083]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:38:47 np0005479822 python3.9[51206]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089126.0165043-243-220862171723225/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:38:48 np0005479822 python3.9[51358]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:38:48 np0005479822 python3.9[51510]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:38:49 np0005479822 python3.9[51662]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:38:50 np0005479822 python3.9[51814]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:38:51 np0005479822 python3.9[51966]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:38:53 np0005479822 python3.9[52119]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:38:54 np0005479822 python3.9[52273]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:38:55 np0005479822 python3.9[52425]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:38:56 np0005479822 python3.9[52577]: ansible-service_facts Invoked
Oct 10 05:38:56 np0005479822 network[52594]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:38:56 np0005479822 network[52595]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:38:56 np0005479822 network[52596]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:39:02 np0005479822 python3.9[53050]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:39:05 np0005479822 python3.9[53203]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 05:39:07 np0005479822 python3.9[53355]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:07 np0005479822 python3.9[53480]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089146.6320033-640-195135685332680/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:08 np0005479822 python3.9[53634]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:09 np0005479822 python3.9[53759]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089148.227183-685-137583090722969/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:11 np0005479822 python3.9[53913]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:12 np0005479822 python3.9[54067]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:39:14 np0005479822 python3.9[54151]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:39:16 np0005479822 python3.9[54305]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:39:17 np0005479822 python3.9[54389]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:39:17 np0005479822 chronyd[795]: chronyd exiting
Oct 10 05:39:17 np0005479822 systemd[1]: Stopping NTP client/server...
Oct 10 05:39:17 np0005479822 systemd[1]: chronyd.service: Deactivated successfully.
Oct 10 05:39:17 np0005479822 systemd[1]: Stopped NTP client/server.
Oct 10 05:39:17 np0005479822 systemd[1]: Starting NTP client/server...
Oct 10 05:39:17 np0005479822 chronyd[54397]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 05:39:17 np0005479822 chronyd[54397]: Frequency -26.925 +/- 0.209 ppm read from /var/lib/chrony/drift
Oct 10 05:39:17 np0005479822 chronyd[54397]: Loaded seccomp filter (level 2)
Oct 10 05:39:17 np0005479822 systemd[1]: Started NTP client/server.
Oct 10 05:39:18 np0005479822 systemd[1]: session-13.scope: Deactivated successfully.
Oct 10 05:39:18 np0005479822 systemd[1]: session-13.scope: Consumed 29.667s CPU time.
Oct 10 05:39:18 np0005479822 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Oct 10 05:39:18 np0005479822 systemd-logind[789]: Removed session 13.
Oct 10 05:39:24 np0005479822 systemd-logind[789]: New session 14 of user zuul.
Oct 10 05:39:24 np0005479822 systemd[1]: Started Session 14 of User zuul.
Oct 10 05:39:24 np0005479822 python3.9[54578]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:25 np0005479822 python3.9[54730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:26 np0005479822 python3.9[54853]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089165.1204658-63-11523828820517/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:26 np0005479822 systemd[1]: session-14.scope: Deactivated successfully.
Oct 10 05:39:26 np0005479822 systemd[1]: session-14.scope: Consumed 1.741s CPU time.
Oct 10 05:39:26 np0005479822 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Oct 10 05:39:26 np0005479822 systemd-logind[789]: Removed session 14.
Oct 10 05:39:32 np0005479822 systemd-logind[789]: New session 15 of user zuul.
Oct 10 05:39:32 np0005479822 systemd[1]: Started Session 15 of User zuul.
Oct 10 05:39:33 np0005479822 python3.9[55031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:39:34 np0005479822 python3.9[55187]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:35 np0005479822 python3.9[55362]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:36 np0005479822 python3.9[55485]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760089174.755317-84-203807779769119/.source.json _original_basename=.60nuqqxp follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:37 np0005479822 python3.9[55637]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:38 np0005479822 python3.9[55760]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089176.8322256-153-8379793235791/.source _original_basename=.z3qttai8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:38 np0005479822 python3.9[55912]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:39:39 np0005479822 python3.9[56064]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:40 np0005479822 python3.9[56187]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089179.2261026-225-2590318932538/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:39:41 np0005479822 python3.9[56339]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:41 np0005479822 python3.9[56462]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089180.6732674-225-248432976007820/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:39:42 np0005479822 python3.9[56614]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:43 np0005479822 python3.9[56766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:44 np0005479822 python3.9[56889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089182.958253-336-97552927995313/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:44 np0005479822 python3.9[57041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:45 np0005479822 python3.9[57164]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089184.332451-382-152919058965498/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:46 np0005479822 python3.9[57316]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:39:46 np0005479822 systemd[1]: Reloading.
Oct 10 05:39:46 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:39:46 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:39:46 np0005479822 systemd[1]: Reloading.
Oct 10 05:39:47 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:39:47 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:39:47 np0005479822 systemd[1]: Starting EDPM Container Shutdown...
Oct 10 05:39:47 np0005479822 systemd[1]: Finished EDPM Container Shutdown.
Oct 10 05:39:47 np0005479822 python3.9[57543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:48 np0005479822 python3.9[57666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089187.4173167-450-258700906599969/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:49 np0005479822 python3.9[57818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:39:50 np0005479822 python3.9[57941]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089188.8319273-495-96657470353741/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:39:51 np0005479822 python3.9[58093]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:39:51 np0005479822 systemd[1]: Reloading.
Oct 10 05:39:51 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:39:51 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:39:52 np0005479822 systemd[1]: Reloading.
Oct 10 05:39:52 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:39:52 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:39:52 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 05:39:52 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 05:39:52 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 05:39:52 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 05:39:53 np0005479822 python3.9[58322]: ansible-ansible.builtin.service_facts Invoked
Oct 10 05:39:53 np0005479822 network[58339]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:39:53 np0005479822 network[58340]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:39:53 np0005479822 network[58341]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:39:59 np0005479822 python3.9[58605]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:39:59 np0005479822 systemd[1]: Reloading.
Oct 10 05:39:59 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:39:59 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:39:59 np0005479822 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 10 05:39:59 np0005479822 iptables.init[58645]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 10 05:40:00 np0005479822 iptables.init[58645]: iptables: Flushing firewall rules: [  OK  ]
Oct 10 05:40:00 np0005479822 systemd[1]: iptables.service: Deactivated successfully.
Oct 10 05:40:00 np0005479822 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 10 05:40:00 np0005479822 python3.9[58841]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:40:01 np0005479822 python3.9[58995]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:40:02 np0005479822 systemd[1]: Reloading.
Oct 10 05:40:02 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:40:02 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:40:02 np0005479822 systemd[1]: Starting Netfilter Tables...
Oct 10 05:40:02 np0005479822 systemd[1]: Finished Netfilter Tables.
Oct 10 05:40:03 np0005479822 python3.9[59188]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:40:04 np0005479822 python3.9[59341]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:05 np0005479822 python3.9[59466]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089204.0707862-702-138017960649119/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:06 np0005479822 python3.9[59617]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:40:31 np0005479822 systemd[1]: session-15.scope: Deactivated successfully.
Oct 10 05:40:31 np0005479822 systemd[1]: session-15.scope: Consumed 23.750s CPU time.
Oct 10 05:40:31 np0005479822 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Oct 10 05:40:31 np0005479822 systemd-logind[789]: Removed session 15.
Oct 10 05:40:44 np0005479822 systemd-logind[789]: New session 16 of user zuul.
Oct 10 05:40:44 np0005479822 systemd[1]: Started Session 16 of User zuul.
Oct 10 05:40:45 np0005479822 python3.9[59810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:40:46 np0005479822 python3.9[59966]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:47 np0005479822 python3.9[60141]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:47 np0005479822 python3.9[60219]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.akvl8ywv recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:48 np0005479822 python3.9[60371]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:49 np0005479822 python3.9[60449]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.axm96psf recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:50 np0005479822 python3.9[60601]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:40:51 np0005479822 python3.9[60753]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:51 np0005479822 python3.9[60831]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:40:52 np0005479822 python3.9[60983]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:52 np0005479822 python3.9[61061]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:40:53 np0005479822 python3.9[61213]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:54 np0005479822 python3.9[61365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:54 np0005479822 python3.9[61443]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:55 np0005479822 python3.9[61595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:56 np0005479822 python3.9[61673]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:40:57 np0005479822 python3.9[61825]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:40:57 np0005479822 systemd[1]: Reloading.
Oct 10 05:40:57 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:40:57 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:40:58 np0005479822 python3.9[62014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:40:59 np0005479822 python3.9[62092]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:00 np0005479822 python3.9[62244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:00 np0005479822 python3.9[62322]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:01 np0005479822 python3.9[62474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:41:01 np0005479822 systemd[1]: Reloading.
Oct 10 05:41:01 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:41:01 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:41:01 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 05:41:01 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 05:41:01 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 05:41:01 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 05:41:02 np0005479822 python3.9[62666]: ansible-ansible.builtin.service_facts Invoked
Oct 10 05:41:02 np0005479822 network[62683]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:41:02 np0005479822 network[62684]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:41:02 np0005479822 network[62685]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:41:07 np0005479822 python3.9[62948]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:08 np0005479822 python3.9[63026]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:09 np0005479822 python3.9[63178]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:10 np0005479822 python3.9[63330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:10 np0005479822 python3.9[63453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089269.5123215-609-199794824050836/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:11 np0005479822 python3.9[63605]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 05:41:11 np0005479822 systemd[1]: Starting Time & Date Service...
Oct 10 05:41:12 np0005479822 systemd[1]: Started Time & Date Service.
Oct 10 05:41:13 np0005479822 python3.9[63761]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:13 np0005479822 python3.9[63913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:14 np0005479822 python3.9[64036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089273.3690977-714-61199807161832/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:15 np0005479822 python3.9[64188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:16 np0005479822 python3.9[64311]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089274.8935711-759-167202637791233/.source.yaml _original_basename=.em7r3cqh follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:16 np0005479822 python3.9[64463]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:17 np0005479822 python3.9[64586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089276.3250012-804-104701402946345/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:18 np0005479822 python3.9[64738]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:41:19 np0005479822 python3.9[64891]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:41:20 np0005479822 python3[65044]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 05:41:21 np0005479822 python3.9[65196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:22 np0005479822 python3.9[65319]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089280.8428023-921-203022338466690/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:23 np0005479822 python3.9[65471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:23 np0005479822 python3.9[65594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089282.4303887-966-162624839359187/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:24 np0005479822 python3.9[65746]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:25 np0005479822 python3.9[65869]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089283.9711156-1012-242607318364306/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:26 np0005479822 python3.9[66021]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:26 np0005479822 python3.9[66144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089285.566291-1057-238607967586954/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:27 np0005479822 chronyd[54397]: Selected source 172.97.210.214 (pool.ntp.org)
Oct 10 05:41:27 np0005479822 python3.9[66296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:41:28 np0005479822 python3.9[66419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089287.0738225-1101-120118822970884/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:29 np0005479822 python3.9[66571]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:30 np0005479822 python3.9[66723]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:41:31 np0005479822 python3.9[66882]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:32 np0005479822 python3.9[67035]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:33 np0005479822 python3.9[67187]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:33 np0005479822 python3.9[67339]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 05:41:34 np0005479822 python3.9[67492]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 05:41:35 np0005479822 systemd[1]: session-16.scope: Deactivated successfully.
Oct 10 05:41:35 np0005479822 systemd[1]: session-16.scope: Consumed 39.781s CPU time.
Oct 10 05:41:35 np0005479822 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Oct 10 05:41:35 np0005479822 systemd-logind[789]: Removed session 16.
Oct 10 05:41:41 np0005479822 systemd-logind[789]: New session 17 of user zuul.
Oct 10 05:41:41 np0005479822 systemd[1]: Started Session 17 of User zuul.
Oct 10 05:41:42 np0005479822 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 05:41:42 np0005479822 python3.9[67673]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 05:41:43 np0005479822 python3.9[67827]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:41:44 np0005479822 python3.9[67979]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:41:45 np0005479822 python3.9[68131]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs576V3VvbSgv48Ml4JM3ripPY5VUVh8vdkDr1njjfd7J/WrQQkTf/D0b7+eGTXj3Y1fx1/haVrDafo7g0NqcSZX+zNUgTCnYPWafo7RMG4Q7ITVk1NPIkAC1cDUxHNeWhXaOkxCz96sTkO4aNW3uoFjsp2JkJtRJmHzT7q/bc0N9x7YcWh9vwRRBiOKlV8cWMHuHUzOlloEQLN67Dht1xHWr1eO/SITqUlWY13tc/54xQuo8nBQNNX9ArhMbJz2a9AoNVUAAYFF8hWFI5ES/GL9qsCp8dnmAtrY4Rc07QmHo1RkcjXe1f6D+vymRIP3YOqIjlWp0blCTfcCGno5lBa9f5JachIsogk+5+GYx4AAbWLyxxecfKzdCxrGnQlfFgldc1xDN1RG+8HwFEAuHQDWTCDUgF67FXSHy7aVxrdzU4046193/o3VKTpSaJmFldASxFgyUeujs56OgC0qYM0zKV4jOsMBcocVHvH/1FOPWIr81XXYvu6C/Ntd6sBj0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSf7pFS/S1SmUMk/yMobwR+LTaQZlAhBqo7Ido5r8dg#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1l0EOuMseZ7ulHkfzzVtKv+5A9EWRy+oXVB+t370vohhJoN3+lviS8xoR8GttJUcHVCaeioniRtOWysbNdC0I=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDarlOcgDXqRdSww3oIuqu7nGIBJToNGSnU1ljOr6GTlHTxxOoTztIrvZrPaJA8w/ixztkhFZZSdRPw4meYayY05CNu9SneiL62twzDLDsqeDPAspkh69Ljj5aGCLf6GJDiK0m2h1jLDIFtXH3lIQE9781zA7ZQ8+/xeF4yRS1/Fb5CXDG+oi/J0veCffs6t0TYmrUfSgS2H2y0UxNu7C6GoQKRde1arPLOYexvlg2RjlWM6Ex4JCqTAd9EN330Kh4HUr3r46ET8mwi1mPndibbiW0heXgrg8FeV5hBqOxQsGgLEKpX1cNAz6Rr0C5Hg1xfGcsJtep88vbJFmMyV1jNowDtJCYpprqa16Nj35HBuuz7zbzVlIdeQhEJ9I4I7eNhUxlb2/XYRXy2hfsrM9D2TP7B+bVPLjlqgqy8stBhGBCtH32ppNsXHE6uGPHMovcz2VhbP/P3sp9NQV+hF2Q0RbBXrQZkEI9YJdhxQw5hyOqwfPrEEBFy8FpzSKfBAW0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1nQuW/lbxVJxo9H20J7i0+Z6cHtufrF4VbA6zs724f#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0oTxSrAqx34tAubl7rouYPI7qhs6NhoDmGr3PTW1+mypEQw0EO+pZ99zSRnweC5RBoL080AgUKo7KN+v3LDHw=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUnwO+j5aInA4FKMx5pWF8B0Zp6L17GsYV5RBbu6iT67LtXjwbz5nP4EC7t80boMHnS7DRNCAxF0FNMVhQ9o4+1E1n2mrUxxAw8YxcZTabu/lAqRb4I6RzmXdXSA9mF8O3onswi/KhJg6YUTFEWCuxWrMLco15IatKi+hNqcRUk1DreR2L/YN0W5qXkvj1z3aoph1h3Yn1lRjuQDrVHp6lCywixC2pHwYG+CrPyX+0PkXJg+JRvRdxNCIw0D0zOkJrnppmT8XpIj42JLRUGGV592XFVXHiEhZdOI2bdzPy490EfIbWF9Symqi/V5vf8SK9LMOscHXkD7jsT6VKzsUXyk6/IzzZ2TzhD173lt8HpRJyaZq4ME0ZSVYNyD58DN/CQ3xpO1c1E8Wp4fUswc4WHmb/eILnY0lDXOZt6Hb/e+K6RHu5e5GOo0KSfei/LyrqJkBQn2P8UkbJvrUh2bNw+whjvT5CmXd3rPCw+Xq3/K3Gpit1K/4pC0zGC+CQr7E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILklS4uW4IrGY5dWZTg4VeKVeFB3jPeUpu/8f4D1+rd5#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCelD2lLiMWT09YjxTI9IfdSnHfdMuHKAAEYFKZmJg34mgwUIDqUQqoc9I6a7Ps9pRizY+UpHWL//lD7hvvhD5k=#012 create=True mode=0644 path=/tmp/ansible.nf09jo51 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:46 np0005479822 python3.9[68283]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.nf09jo51' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:41:47 np0005479822 python3.9[68437]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.nf09jo51 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:41:47 np0005479822 systemd[1]: session-17.scope: Deactivated successfully.
Oct 10 05:41:47 np0005479822 systemd[1]: session-17.scope: Consumed 4.355s CPU time.
Oct 10 05:41:47 np0005479822 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Oct 10 05:41:47 np0005479822 systemd-logind[789]: Removed session 17.
Oct 10 05:41:53 np0005479822 systemd-logind[789]: New session 18 of user zuul.
Oct 10 05:41:53 np0005479822 systemd[1]: Started Session 18 of User zuul.
Oct 10 05:41:54 np0005479822 python3.9[68616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:41:55 np0005479822 python3.9[68772]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 05:41:56 np0005479822 python3.9[68926]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:41:57 np0005479822 python3.9[69079]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:41:58 np0005479822 python3.9[69232]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:41:59 np0005479822 python3.9[69386]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:42:00 np0005479822 python3.9[69541]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:42:01 np0005479822 systemd[1]: session-18.scope: Deactivated successfully.
Oct 10 05:42:01 np0005479822 systemd[1]: session-18.scope: Consumed 5.711s CPU time.
Oct 10 05:42:01 np0005479822 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Oct 10 05:42:01 np0005479822 systemd-logind[789]: Removed session 18.
Oct 10 05:42:06 np0005479822 systemd-logind[789]: New session 19 of user zuul.
Oct 10 05:42:06 np0005479822 systemd[1]: Started Session 19 of User zuul.
Oct 10 05:42:07 np0005479822 python3.9[69719]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:42:08 np0005479822 python3.9[69875]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:42:09 np0005479822 python3.9[69959]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 05:42:11 np0005479822 python3.9[70110]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:42:13 np0005479822 python3.9[70261]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 05:42:14 np0005479822 python3.9[70411]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:42:14 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:42:14 np0005479822 python3.9[70562]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:42:15 np0005479822 systemd[1]: session-19.scope: Deactivated successfully.
Oct 10 05:42:15 np0005479822 systemd[1]: session-19.scope: Consumed 6.794s CPU time.
Oct 10 05:42:15 np0005479822 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Oct 10 05:42:15 np0005479822 systemd-logind[789]: Removed session 19.
Oct 10 05:42:23 np0005479822 systemd-logind[789]: New session 20 of user zuul.
Oct 10 05:42:23 np0005479822 systemd[1]: Started Session 20 of User zuul.
Oct 10 05:42:29 np0005479822 python3[71329]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:42:31 np0005479822 python3[71424]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 05:42:33 np0005479822 python3[71451]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:42:33 np0005479822 python3[71477]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:42:33 np0005479822 kernel: loop: module loaded
Oct 10 05:42:33 np0005479822 kernel: loop3: detected capacity change from 0 to 41943040
Oct 10 05:42:34 np0005479822 python3[71512]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:42:34 np0005479822 lvm[71515]: PV /dev/loop3 not used.
Oct 10 05:42:34 np0005479822 lvm[71517]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 05:42:34 np0005479822 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 10 05:42:34 np0005479822 lvm[71527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 05:42:34 np0005479822 lvm[71527]: VG ceph_vg0 finished
Oct 10 05:42:34 np0005479822 lvm[71525]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct 10 05:42:34 np0005479822 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 10 05:42:34 np0005479822 python3[71605]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 05:42:35 np0005479822 python3[71678]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089354.5783622-33484-211800466340561/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:42:36 np0005479822 python3[71728]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:42:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:42:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:42:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:42:36 np0005479822 systemd[1]: Starting Ceph OSD losetup...
Oct 10 05:42:36 np0005479822 bash[71768]: /dev/loop3: [64513]:4555204 (/var/lib/ceph-osd-0.img)
Oct 10 05:42:36 np0005479822 systemd[1]: Finished Ceph OSD losetup.
Oct 10 05:42:36 np0005479822 lvm[71769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 05:42:36 np0005479822 lvm[71769]: VG ceph_vg0 finished
Oct 10 05:42:38 np0005479822 python3[71793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:43:00 np0005479822 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 05:44:04 np0005479822 systemd-logind[789]: New session 21 of user ceph-admin.
Oct 10 05:44:04 np0005479822 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 05:44:04 np0005479822 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 05:44:04 np0005479822 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 05:44:05 np0005479822 systemd[1]: Starting User Manager for UID 42477...
Oct 10 05:44:05 np0005479822 systemd-logind[789]: New session 23 of user ceph-admin.
Oct 10 05:44:05 np0005479822 systemd[71841]: Queued start job for default target Main User Target.
Oct 10 05:44:05 np0005479822 systemd[71841]: Created slice User Application Slice.
Oct 10 05:44:05 np0005479822 systemd[71841]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 05:44:05 np0005479822 systemd[71841]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 05:44:05 np0005479822 systemd[71841]: Reached target Paths.
Oct 10 05:44:05 np0005479822 systemd[71841]: Reached target Timers.
Oct 10 05:44:05 np0005479822 systemd[71841]: Starting D-Bus User Message Bus Socket...
Oct 10 05:44:05 np0005479822 systemd[71841]: Starting Create User's Volatile Files and Directories...
Oct 10 05:44:05 np0005479822 systemd[71841]: Listening on D-Bus User Message Bus Socket.
Oct 10 05:44:05 np0005479822 systemd[71841]: Reached target Sockets.
Oct 10 05:44:05 np0005479822 systemd[71841]: Finished Create User's Volatile Files and Directories.
Oct 10 05:44:05 np0005479822 systemd[71841]: Reached target Basic System.
Oct 10 05:44:05 np0005479822 systemd[71841]: Reached target Main User Target.
Oct 10 05:44:05 np0005479822 systemd[71841]: Startup finished in 314ms.
Oct 10 05:44:05 np0005479822 systemd[1]: Started User Manager for UID 42477.
Oct 10 05:44:05 np0005479822 systemd[1]: Started Session 21 of User ceph-admin.
Oct 10 05:44:05 np0005479822 systemd[1]: Started Session 23 of User ceph-admin.
Oct 10 05:44:05 np0005479822 systemd-logind[789]: New session 24 of user ceph-admin.
Oct 10 05:44:05 np0005479822 systemd[1]: Started Session 24 of User ceph-admin.
Oct 10 05:44:06 np0005479822 systemd-logind[789]: New session 25 of user ceph-admin.
Oct 10 05:44:06 np0005479822 systemd[1]: Started Session 25 of User ceph-admin.
Oct 10 05:44:06 np0005479822 systemd-logind[789]: New session 26 of user ceph-admin.
Oct 10 05:44:06 np0005479822 systemd[1]: Started Session 26 of User ceph-admin.
Oct 10 05:44:06 np0005479822 systemd-logind[789]: New session 27 of user ceph-admin.
Oct 10 05:44:06 np0005479822 systemd[1]: Started Session 27 of User ceph-admin.
Oct 10 05:44:07 np0005479822 systemd-logind[789]: New session 28 of user ceph-admin.
Oct 10 05:44:07 np0005479822 systemd[1]: Started Session 28 of User ceph-admin.
Oct 10 05:44:07 np0005479822 systemd-logind[789]: New session 29 of user ceph-admin.
Oct 10 05:44:07 np0005479822 systemd[1]: Started Session 29 of User ceph-admin.
Oct 10 05:44:08 np0005479822 systemd-logind[789]: New session 30 of user ceph-admin.
Oct 10 05:44:08 np0005479822 systemd[1]: Started Session 30 of User ceph-admin.
Oct 10 05:44:08 np0005479822 systemd-logind[789]: New session 31 of user ceph-admin.
Oct 10 05:44:08 np0005479822 systemd[1]: Started Session 31 of User ceph-admin.
Oct 10 05:44:09 np0005479822 systemd-logind[789]: New session 32 of user ceph-admin.
Oct 10 05:44:09 np0005479822 systemd[1]: Started Session 32 of User ceph-admin.
Oct 10 05:44:10 np0005479822 systemd-logind[789]: New session 33 of user ceph-admin.
Oct 10 05:44:10 np0005479822 systemd[1]: Started Session 33 of User ceph-admin.
Oct 10 05:44:10 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:11 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:11 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:11 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:12 np0005479822 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 72413 (sysctl)
Oct 10 05:44:12 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:12 np0005479822 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 10 05:44:12 np0005479822 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 10 05:44:13 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:14 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:14 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:16 np0005479822 systemd[1]: var-lib-containers-storage-overlay-compat1005408732-lower\x2dmapped.mount: Deactivated successfully.
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.719911895 +0000 UTC m=+16.533105867 container create ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 05:44:30 np0005479822 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 10 05:44:30 np0005479822 systemd[1]: Started libpod-conmon-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope.
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.706543885 +0000 UTC m=+16.519737887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:30 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.835985177 +0000 UTC m=+16.649179229 container init ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.848291156 +0000 UTC m=+16.661485168 container start ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.853205559 +0000 UTC m=+16.666399571 container attach ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 05:44:30 np0005479822 hopeful_lalande[72653]: 167 167
Oct 10 05:44:30 np0005479822 systemd[1]: libpod-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope: Deactivated successfully.
Oct 10 05:44:30 np0005479822 conmon[72653]: conmon ab6e42155a4eaa161c43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope/container/memory.events
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.859701569 +0000 UTC m=+16.672895581 container died ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:44:30 np0005479822 systemd[1]: var-lib-containers-storage-overlay-d9a28920cae51ace91533454b6d32f787a3d7cac527fa8157e3e696f4a292469-merged.mount: Deactivated successfully.
Oct 10 05:44:30 np0005479822 podman[72592]: 2025-10-10 09:44:30.909294043 +0000 UTC m=+16.722488045 container remove ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:44:30 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:30 np0005479822 systemd[1]: libpod-conmon-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope: Deactivated successfully.
Oct 10 05:44:31 np0005479822 podman[72678]: 2025-10-10 09:44:31.109658103 +0000 UTC m=+0.053168651 container create 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 05:44:31 np0005479822 systemd[1]: Started libpod-conmon-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope.
Oct 10 05:44:31 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:31 np0005479822 podman[72678]: 2025-10-10 09:44:31.084384386 +0000 UTC m=+0.027894924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:31 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659e4b428a4235e9665f29653d184acd03aeaeb86e84157aeb3a99918ae597f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:31 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659e4b428a4235e9665f29653d184acd03aeaeb86e84157aeb3a99918ae597f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:31 np0005479822 podman[72678]: 2025-10-10 09:44:31.195720701 +0000 UTC m=+0.139231259 container init 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:31 np0005479822 podman[72678]: 2025-10-10 09:44:31.208306697 +0000 UTC m=+0.151817245 container start 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:44:31 np0005479822 podman[72678]: 2025-10-10 09:44:31.212552821 +0000 UTC m=+0.156063399 container attach 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]: [
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:    {
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "available": false,
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "being_replaced": false,
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "ceph_device_lvm": false,
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "lsm_data": {},
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "lvs": [],
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "path": "/dev/sr0",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "rejected_reasons": [
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "Insufficient space (<5GB)",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "Has a FileSystem"
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        ],
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        "sys_api": {
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "actuators": null,
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "device_nodes": [
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:                "sr0"
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            ],
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "devname": "sr0",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "human_readable_size": "482.00 KB",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "id_bus": "ata",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "model": "QEMU DVD-ROM",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "nr_requests": "2",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "parent": "/dev/sr0",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "partitions": {},
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "path": "/dev/sr0",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "removable": "1",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "rev": "2.5+",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "ro": "0",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "rotational": "0",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "sas_address": "",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "sas_device_handle": "",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "scheduler_mode": "mq-deadline",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "sectors": 0,
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "sectorsize": "2048",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "size": 493568.0,
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "support_discard": "2048",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "type": "disk",
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:            "vendor": "QEMU"
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:        }
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]:    }
Oct 10 05:44:32 np0005479822 pensive_jennings[72694]: ]
Oct 10 05:44:32 np0005479822 systemd[1]: libpod-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope: Deactivated successfully.
Oct 10 05:44:32 np0005479822 systemd[1]: libpod-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope: Consumed 1.009s CPU time.
Oct 10 05:44:32 np0005479822 podman[73651]: 2025-10-10 09:44:32.239844167 +0000 UTC m=+0.032397384 container died 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:32 np0005479822 systemd[1]: var-lib-containers-storage-overlay-6659e4b428a4235e9665f29653d184acd03aeaeb86e84157aeb3a99918ae597f-merged.mount: Deactivated successfully.
Oct 10 05:44:32 np0005479822 podman[73651]: 2025-10-10 09:44:32.292373458 +0000 UTC m=+0.084926645 container remove 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 05:44:32 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:32 np0005479822 systemd[1]: libpod-conmon-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope: Deactivated successfully.
Oct 10 05:44:35 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:35 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:36.020065258 +0000 UTC m=+0.057219529 container create 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 05:44:36 np0005479822 systemd[1]: Started libpod-conmon-8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856.scope.
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:35.992143895 +0000 UTC m=+0.029298246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:36 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:36.124044778 +0000 UTC m=+0.161199069 container init 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:36.135006218 +0000 UTC m=+0.172160499 container start 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:36.139645152 +0000 UTC m=+0.176799443 container attach 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 05:44:36 np0005479822 gallant_tesla[74676]: 167 167
Oct 10 05:44:36 np0005479822 systemd[1]: libpod-8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856.scope: Deactivated successfully.
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:36.141370003 +0000 UTC m=+0.178524264 container died 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 10 05:44:36 np0005479822 podman[74659]: 2025-10-10 09:44:36.180923066 +0000 UTC m=+0.218077317 container remove 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:44:36 np0005479822 systemd[1]: libpod-conmon-8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856.scope: Deactivated successfully.
Oct 10 05:44:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:36 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:36 np0005479822 systemd[1]: Reached target All Ceph clusters and services.
Oct 10 05:44:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:36 np0005479822 systemd[1]: Reached target Ceph cluster 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:44:37 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:37 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:37 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:37 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:37 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:37 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:37 np0005479822 systemd[1]: Created slice Slice /system/ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:44:37 np0005479822 systemd[1]: Reached target System Time Set.
Oct 10 05:44:37 np0005479822 systemd[1]: Reached target System Time Synchronized.
Oct 10 05:44:37 np0005479822 systemd[1]: Starting Ceph crash.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:44:37 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:37 np0005479822 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 05:44:37 np0005479822 podman[74933]: 2025-10-10 09:44:37.829909379 +0000 UTC m=+0.025953697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:37 np0005479822 podman[74933]: 2025-10-10 09:44:37.987630385 +0000 UTC m=+0.183674613 container create 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 05:44:38 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4484fa1eac69ede5320a83e24cd9bbe032921d8ebaa48af85faebda6c40151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:38 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4484fa1eac69ede5320a83e24cd9bbe032921d8ebaa48af85faebda6c40151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:38 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4484fa1eac69ede5320a83e24cd9bbe032921d8ebaa48af85faebda6c40151/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:38 np0005479822 podman[74933]: 2025-10-10 09:44:38.085831417 +0000 UTC m=+0.281875725 container init 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 05:44:38 np0005479822 podman[74933]: 2025-10-10 09:44:38.09553124 +0000 UTC m=+0.291575518 container start 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 05:44:38 np0005479822 bash[74933]: 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8
Oct 10 05:44:38 np0005479822 systemd[1]: Started Ceph crash.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.302+0000 7fa8e7fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.302+0000 7fa8e7fff640 -1 AuthRegistry(0x7fa8e8069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.304+0000 7fa8e7fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.304+0000 7fa8e7fff640 -1 AuthRegistry(0x7fa8e7ffdff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.307+0000 7fa8e6ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.307+0000 7fa8e7fff640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 10 05:44:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 10 05:44:38 np0005479822 podman[75057]: 2025-10-10 09:44:38.929896554 +0000 UTC m=+0.082947328 container create ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 05:44:38 np0005479822 podman[75057]: 2025-10-10 09:44:38.895966065 +0000 UTC m=+0.049016919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:38 np0005479822 systemd[1]: Started libpod-conmon-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope.
Oct 10 05:44:39 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:39 np0005479822 podman[75057]: 2025-10-10 09:44:39.049574082 +0000 UTC m=+0.202624916 container init ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:39 np0005479822 podman[75057]: 2025-10-10 09:44:39.061225741 +0000 UTC m=+0.214276525 container start ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:39 np0005479822 podman[75057]: 2025-10-10 09:44:39.065897978 +0000 UTC m=+0.218948832 container attach ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:44:39 np0005479822 systemd[1]: libpod-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope: Deactivated successfully.
Oct 10 05:44:39 np0005479822 jovial_goldberg[75074]: 167 167
Oct 10 05:44:39 np0005479822 conmon[75074]: conmon ca9c43686deef50ef109 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope/container/memory.events
Oct 10 05:44:39 np0005479822 podman[75057]: 2025-10-10 09:44:39.07217775 +0000 UTC m=+0.225228534 container died ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:44:39 np0005479822 systemd[1]: var-lib-containers-storage-overlay-a8c4df69c2c7f5c08746ab4b6f3433f4d25f39c8376e1ef6529a9136e02ffa35-merged.mount: Deactivated successfully.
Oct 10 05:44:39 np0005479822 podman[75057]: 2025-10-10 09:44:39.127225985 +0000 UTC m=+0.280276769 container remove ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:39 np0005479822 systemd[1]: libpod-conmon-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope: Deactivated successfully.
Oct 10 05:44:39 np0005479822 podman[75097]: 2025-10-10 09:44:39.393174755 +0000 UTC m=+0.076400787 container create 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:39 np0005479822 systemd[1]: Started libpod-conmon-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope.
Oct 10 05:44:39 np0005479822 podman[75097]: 2025-10-10 09:44:39.356547538 +0000 UTC m=+0.039773630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:39 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:39 np0005479822 podman[75097]: 2025-10-10 09:44:39.494305042 +0000 UTC m=+0.177531124 container init 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 05:44:39 np0005479822 podman[75097]: 2025-10-10 09:44:39.506367663 +0000 UTC m=+0.189593685 container start 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:39 np0005479822 podman[75097]: 2025-10-10 09:44:39.511310698 +0000 UTC m=+0.194536770 container attach 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 05:44:39 np0005479822 angry_grothendieck[75113]: --> passed data devices: 0 physical, 1 LVM
Oct 10 05:44:39 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new aea3dcf0-efc7-4ff7-81f8-9509a806fb04
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:40 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 10 05:44:40 np0005479822 lvm[75178]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 05:44:40 np0005479822 lvm[75178]: VG ceph_vg0 finished
Oct 10 05:44:41 np0005479822 angry_grothendieck[75113]: stderr: got monmap epoch 1
Oct 10 05:44:41 np0005479822 angry_grothendieck[75113]: --> Creating keyring file for osd.1
Oct 10 05:44:41 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 10 05:44:41 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 10 05:44:41 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid aea3dcf0-efc7-4ff7-81f8-9509a806fb04 --setuser ceph --setgroup ceph
Oct 10 05:44:44 np0005479822 angry_grothendieck[75113]: stderr: 2025-10-10T09:44:41.327+0000 7fd8dc9b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Oct 10 05:44:44 np0005479822 angry_grothendieck[75113]: stderr: 2025-10-10T09:44:41.597+0000 7fd8dc9b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 10 05:44:44 np0005479822 angry_grothendieck[75113]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 10 05:44:44 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 05:44:44 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 05:44:45 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:45 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:45 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 05:44:45 np0005479822 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 05:44:45 np0005479822 angry_grothendieck[75113]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 05:44:45 np0005479822 angry_grothendieck[75113]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 10 05:44:45 np0005479822 systemd[1]: libpod-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope: Deactivated successfully.
Oct 10 05:44:45 np0005479822 systemd[1]: libpod-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope: Consumed 2.493s CPU time.
Oct 10 05:44:45 np0005479822 podman[76078]: 2025-10-10 09:44:45.141503188 +0000 UTC m=+0.029658681 container died 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 05:44:45 np0005479822 systemd[1]: var-lib-containers-storage-overlay-30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9-merged.mount: Deactivated successfully.
Oct 10 05:44:45 np0005479822 podman[76078]: 2025-10-10 09:44:45.193377023 +0000 UTC m=+0.081532506 container remove 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 05:44:45 np0005479822 systemd[1]: libpod-conmon-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope: Deactivated successfully.
Oct 10 05:44:45 np0005479822 podman[76184]: 2025-10-10 09:44:45.880831749 +0000 UTC m=+0.066503947 container create 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:45 np0005479822 systemd[1]: Started libpod-conmon-2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489.scope.
Oct 10 05:44:45 np0005479822 podman[76184]: 2025-10-10 09:44:45.853840818 +0000 UTC m=+0.039513096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:45 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:45 np0005479822 podman[76184]: 2025-10-10 09:44:45.975814263 +0000 UTC m=+0.161486491 container init 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:44:45 np0005479822 podman[76184]: 2025-10-10 09:44:45.982443145 +0000 UTC m=+0.168115343 container start 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:45 np0005479822 podman[76184]: 2025-10-10 09:44:45.986170661 +0000 UTC m=+0.171842879 container attach 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 05:44:45 np0005479822 affectionate_turing[76201]: 167 167
Oct 10 05:44:45 np0005479822 systemd[1]: libpod-2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489.scope: Deactivated successfully.
Oct 10 05:44:45 np0005479822 podman[76184]: 2025-10-10 09:44:45.991723905 +0000 UTC m=+0.177396123 container died 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:44:46 np0005479822 systemd[1]: var-lib-containers-storage-overlay-4ce150879014ac2a61092bd230545284892f0fd59247521b94fa09d02765a264-merged.mount: Deactivated successfully.
Oct 10 05:44:46 np0005479822 podman[76184]: 2025-10-10 09:44:46.046008044 +0000 UTC m=+0.231680262 container remove 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:46 np0005479822 systemd[1]: libpod-conmon-2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489.scope: Deactivated successfully.
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.286960025 +0000 UTC m=+0.060478030 container create 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:44:46 np0005479822 systemd[1]: Started libpod-conmon-87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5.scope.
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.256425443 +0000 UTC m=+0.029943528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:46 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:46 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:46 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:46 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:46 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.377837213 +0000 UTC m=+0.151355248 container init 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.384015463 +0000 UTC m=+0.157533498 container start 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.38811732 +0000 UTC m=+0.161635315 container attach 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]: {
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:    "1": [
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:        {
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "devices": [
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "/dev/loop3"
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            ],
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "lv_name": "ceph_lv0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "lv_size": "21470642176",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NmNLD2-CQMY-EuHT-dv5T-keSY-5aCM-1JK6n1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=aea3dcf0-efc7-4ff7-81f8-9509a806fb04,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "lv_uuid": "NmNLD2-CQMY-EuHT-dv5T-keSY-5aCM-1JK6n1",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "name": "ceph_lv0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "tags": {
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.block_uuid": "NmNLD2-CQMY-EuHT-dv5T-keSY-5aCM-1JK6n1",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.cephx_lockbox_secret": "",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.cluster_name": "ceph",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.crush_device_class": "",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.encrypted": "0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.osd_fsid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.osd_id": "1",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.osdspec_affinity": "default_drive_group",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.type": "block",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.vdo": "0",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:                "ceph.with_tpm": "0"
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            },
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "type": "block",
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:            "vg_name": "ceph_vg0"
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:        }
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]:    ]
Oct 10 05:44:46 np0005479822 confident_agnesi[76244]: }
Oct 10 05:44:46 np0005479822 systemd[1]: libpod-87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5.scope: Deactivated successfully.
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.710985616 +0000 UTC m=+0.484503621 container died 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 05:44:46 np0005479822 systemd[1]: var-lib-containers-storage-overlay-bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500-merged.mount: Deactivated successfully.
Oct 10 05:44:46 np0005479822 podman[76226]: 2025-10-10 09:44:46.765991363 +0000 UTC m=+0.539509398 container remove 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:44:46 np0005479822 systemd[1]: libpod-conmon-87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5.scope: Deactivated successfully.
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.559669444 +0000 UTC m=+0.070425688 container create 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 05:44:47 np0005479822 systemd[1]: Started libpod-conmon-8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb.scope.
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.527015116 +0000 UTC m=+0.037771360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:47 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.664226016 +0000 UTC m=+0.174982260 container init 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.675023576 +0000 UTC m=+0.185779820 container start 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.679505873 +0000 UTC m=+0.190262117 container attach 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 05:44:47 np0005479822 laughing_shannon[76372]: 167 167
Oct 10 05:44:47 np0005479822 systemd[1]: libpod-8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb.scope: Deactivated successfully.
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.685114938 +0000 UTC m=+0.195871172 container died 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:44:47 np0005479822 systemd[1]: var-lib-containers-storage-overlay-59eb3b4202372853172443441f1ebba4fc5c6a544a342723d8efd997751c2d1c-merged.mount: Deactivated successfully.
Oct 10 05:44:47 np0005479822 podman[76356]: 2025-10-10 09:44:47.741100571 +0000 UTC m=+0.251856815 container remove 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:44:47 np0005479822 systemd[1]: libpod-conmon-8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb.scope: Deactivated successfully.
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.119319543 +0000 UTC m=+0.068792046 container create 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 05:44:48 np0005479822 systemd[1]: Started libpod-conmon-12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1.scope.
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.090182307 +0000 UTC m=+0.039654890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:48 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:48 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:48 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:48 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:48 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:48 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.218086645 +0000 UTC m=+0.167559228 container init 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.234999184 +0000 UTC m=+0.184471697 container start 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.239111381 +0000 UTC m=+0.188583914 container attach 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:44:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test[76420]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 10 05:44:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test[76420]:                            [--no-systemd] [--no-tmpfs]
Oct 10 05:44:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test[76420]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 10 05:44:48 np0005479822 systemd[1]: libpod-12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1.scope: Deactivated successfully.
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.42105323 +0000 UTC m=+0.370525753 container died 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 05:44:48 np0005479822 systemd[1]: var-lib-containers-storage-overlay-4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264-merged.mount: Deactivated successfully.
Oct 10 05:44:48 np0005479822 podman[76403]: 2025-10-10 09:44:48.481875039 +0000 UTC m=+0.431347562 container remove 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 05:44:48 np0005479822 systemd[1]: libpod-conmon-12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1.scope: Deactivated successfully.
Oct 10 05:44:48 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:48 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:48 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:49 np0005479822 systemd[1]: Reloading.
Oct 10 05:44:49 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:44:49 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:44:49 np0005479822 systemd[1]: Starting Ceph osd.1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:44:49 np0005479822 podman[76581]: 2025-10-10 09:44:49.785862659 +0000 UTC m=+0.070417548 container create f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:49 np0005479822 podman[76581]: 2025-10-10 09:44:49.758780386 +0000 UTC m=+0.043335335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:49 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:49 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:49 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:49 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:49 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:49 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:49 np0005479822 podman[76581]: 2025-10-10 09:44:49.883142092 +0000 UTC m=+0.167696991 container init f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:49 np0005479822 podman[76581]: 2025-10-10 09:44:49.892784723 +0000 UTC m=+0.177339622 container start f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 05:44:49 np0005479822 podman[76581]: 2025-10-10 09:44:49.897145505 +0000 UTC m=+0.181700374 container attach f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 lvm[76678]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 05:44:50 np0005479822 lvm[76678]: VG ceph_vg0 finished
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 bash[76581]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 10 05:44:50 np0005479822 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 05:44:50 np0005479822 bash[76581]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 05:44:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 05:44:50 np0005479822 bash[76581]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 05:44:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:51 np0005479822 bash[76581]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:51 np0005479822 bash[76581]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 05:44:51 np0005479822 bash[76581]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 05:44:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 05:44:51 np0005479822 bash[76581]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 05:44:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 05:44:51 np0005479822 bash[76581]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 05:44:51 np0005479822 systemd[1]: libpod-f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4.scope: Deactivated successfully.
Oct 10 05:44:51 np0005479822 systemd[1]: libpod-f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4.scope: Consumed 1.598s CPU time.
Oct 10 05:44:51 np0005479822 podman[76581]: 2025-10-10 09:44:51.297385163 +0000 UTC m=+1.581940062 container died f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 05:44:51 np0005479822 systemd[1]: var-lib-containers-storage-overlay-32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312-merged.mount: Deactivated successfully.
Oct 10 05:44:51 np0005479822 podman[76581]: 2025-10-10 09:44:51.364793772 +0000 UTC m=+1.649348661 container remove f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:51 np0005479822 podman[76847]: 2025-10-10 09:44:51.713800597 +0000 UTC m=+0.073256952 container create 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 05:44:51 np0005479822 podman[76847]: 2025-10-10 09:44:51.685507853 +0000 UTC m=+0.044964258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:51 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:51 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:51 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:51 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:51 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:51 np0005479822 podman[76847]: 2025-10-10 09:44:51.810274709 +0000 UTC m=+0.169731124 container init 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 05:44:51 np0005479822 podman[76847]: 2025-10-10 09:44:51.82572835 +0000 UTC m=+0.185184705 container start 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:44:51 np0005479822 bash[76847]: 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5
Oct 10 05:44:51 np0005479822 systemd[1]: Started Ceph osd.1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: pidfile_write: ignore empty --pid-file
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:51 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.674171962 +0000 UTC m=+0.064945165 container create f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Oct 10 05:44:52 np0005479822 systemd[1]: Started libpod-conmon-f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6.scope.
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.648862426 +0000 UTC m=+0.039635649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:52 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:52 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.788050726 +0000 UTC m=+0.178823919 container init f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.799908815 +0000 UTC m=+0.190681988 container start f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.803564259 +0000 UTC m=+0.194337432 container attach f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:44:52 np0005479822 jolly_mahavira[76991]: 167 167
Oct 10 05:44:52 np0005479822 systemd[1]: libpod-f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6.scope: Deactivated successfully.
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.809968675 +0000 UTC m=+0.200741888 container died f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:44:52 np0005479822 systemd[1]: var-lib-containers-storage-overlay-4eb8e020131bd03da8b29b3cff42b851106bc0d9136f120f4decea947d0f2efb-merged.mount: Deactivated successfully.
Oct 10 05:44:52 np0005479822 podman[76975]: 2025-10-10 09:44:52.856757709 +0000 UTC m=+0.247530912 container remove f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 05:44:52 np0005479822 systemd[1]: libpod-conmon-f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6.scope: Deactivated successfully.
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:53 np0005479822 podman[77015]: 2025-10-10 09:44:53.037373784 +0000 UTC m=+0.051312722 container create 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 05:44:53 np0005479822 systemd[1]: Started libpod-conmon-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope.
Oct 10 05:44:53 np0005479822 podman[77015]: 2025-10-10 09:44:53.013538077 +0000 UTC m=+0.027477045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:53 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:53 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:53 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:53 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:53 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:53 np0005479822 podman[77015]: 2025-10-10 09:44:53.148227271 +0000 UTC m=+0.162166289 container init 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:53 np0005479822 podman[77015]: 2025-10-10 09:44:53.172340707 +0000 UTC m=+0.186279645 container start 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 05:44:53 np0005479822 podman[77015]: 2025-10-10 09:44:53.176553566 +0000 UTC m=+0.190492534 container attach 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: load: jerasure load: lrc 
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 05:44:53 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:53 np0005479822 lvm[77118]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 05:44:53 np0005479822 lvm[77118]: VG ceph_vg0 finished
Oct 10 05:44:54 np0005479822 hopeful_davinci[77034]: {}
Oct 10 05:44:54 np0005479822 systemd[1]: libpod-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope: Deactivated successfully.
Oct 10 05:44:54 np0005479822 podman[77015]: 2025-10-10 09:44:54.083657657 +0000 UTC m=+1.097596655 container died 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 05:44:54 np0005479822 systemd[1]: libpod-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope: Consumed 1.601s CPU time.
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:54 np0005479822 systemd[1]: var-lib-containers-storage-overlay-d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5-merged.mount: Deactivated successfully.
Oct 10 05:44:54 np0005479822 podman[77015]: 2025-10-10 09:44:54.144479735 +0000 UTC m=+1.158418713 container remove 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:54 np0005479822 systemd[1]: libpod-conmon-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope: Deactivated successfully.
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount shared_bdev_used = 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: RocksDB version: 7.9.2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Git sha 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: DB SUMMARY
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: DB Session ID:  SHLX46DNWVN5ILQBHSQ1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: CURRENT file:  CURRENT
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.error_if_exists: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.create_if_missing: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                     Options.env: 0x55b0958a7dc0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                Options.info_log: 0x55b0958ab7a0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                              Options.statistics: (nil)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.use_fsync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                              Options.db_log_dir: 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.write_buffer_manager: 0x55b0959a0a00
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.unordered_write: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.row_cache: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                              Options.wal_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.two_write_queues: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.wal_compression: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.atomic_flush: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_background_jobs: 4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_background_compactions: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_subcompactions: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.max_open_files: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Compression algorithms supported:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kZSTD supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kXpressCompression supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kZlibCompression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac69b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac69b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac69b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6165b24c-a467-4199-b240-d2d6d1edbf3f
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494735052, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494735701, "job": 1, "event": "recovery_finished"}
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: freelist init
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: freelist _read_cfg
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs umount
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluefs mount shared_bdev_used = 4718592
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: RocksDB version: 7.9.2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Git sha 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: DB SUMMARY
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: DB Session ID:  SHLX46DNWVN5ILQBHSQ0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: CURRENT file:  CURRENT
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.error_if_exists: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.create_if_missing: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                     Options.env: 0x55b095a44310
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                Options.info_log: 0x55b0958ab920
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                              Options.statistics: (nil)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.use_fsync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                              Options.db_log_dir: 
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.write_buffer_manager: 0x55b0959a0a00
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.unordered_write: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.row_cache: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                              Options.wal_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.two_write_queues: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.wal_compression: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.atomic_flush: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_background_jobs: 4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_background_compactions: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_subcompactions: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.max_open_files: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Compression algorithms supported:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kZSTD supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kXpressCompression supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kZlibCompression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b094ac7350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:54 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac7350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac69b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac69b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b094ac69b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6165b24c-a467-4199-b240-d2d6d1edbf3f
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494992902, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494997270, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089494, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6165b24c-a467-4199-b240-d2d6d1edbf3f", "db_session_id": "SHLX46DNWVN5ILQBHSQ0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089495000695, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089494, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6165b24c-a467-4199-b240-d2d6d1edbf3f", "db_session_id": "SHLX46DNWVN5ILQBHSQ0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089495003925, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089494, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6165b24c-a467-4199-b240-d2d6d1edbf3f", "db_session_id": "SHLX46DNWVN5ILQBHSQ0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089495005749, "job": 1, "event": "recovery_finished"}
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b095a84000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: DB pointer 0x55b095a52000
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usag
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: _get_class not permitted to load lua
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: _get_class not permitted to load sdk
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 load_pgs
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 load_pgs opened 0 pgs
Oct 10 05:44:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1[76863]: 2025-10-10T09:44:55.048+0000 7fc8b33b2740 -1 osd.1 0 log_to_monitors true
Oct 10 05:44:55 np0005479822 ceph-osd[76867]: osd.1 0 log_to_monitors true
Oct 10 05:44:55 np0005479822 podman[77698]: 2025-10-10 09:44:55.441206076 +0000 UTC m=+0.091300399 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Oct 10 05:44:55 np0005479822 podman[77698]: 2025-10-10 09:44:55.562678868 +0000 UTC m=+0.212773111 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 05:44:56 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 10 05:44:56 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.411609302 +0000 UTC m=+0.051778385 container create 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 05:44:56 np0005479822 systemd[1]: Started libpod-conmon-670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd.scope.
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.390668268 +0000 UTC m=+0.030837351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:56 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.507360555 +0000 UTC m=+0.147530158 container init 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.516490412 +0000 UTC m=+0.156659525 container start 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.520893277 +0000 UTC m=+0.161062370 container attach 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 05:44:56 np0005479822 zen_benz[77854]: 167 167
Oct 10 05:44:56 np0005479822 systemd[1]: libpod-670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd.scope: Deactivated successfully.
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.525848095 +0000 UTC m=+0.166017208 container died 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 05:44:56 np0005479822 systemd[1]: var-lib-containers-storage-overlay-75616985f63a1846b7bb76c201a96b98b21081db7cc16e09605557691c1d7c39-merged.mount: Deactivated successfully.
Oct 10 05:44:56 np0005479822 podman[77837]: 2025-10-10 09:44:56.582406803 +0000 UTC m=+0.222575906 container remove 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:44:56 np0005479822 systemd[1]: libpod-conmon-670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd.scope: Deactivated successfully.
Oct 10 05:44:56 np0005479822 podman[77876]: 2025-10-10 09:44:56.750574075 +0000 UTC m=+0.052411201 container create f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 05:44:56 np0005479822 systemd[1]: Started libpod-conmon-f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8.scope.
Oct 10 05:44:56 np0005479822 podman[77876]: 2025-10-10 09:44:56.729112498 +0000 UTC m=+0.030949644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:44:56 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:44:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:44:56 np0005479822 podman[77876]: 2025-10-10 09:44:56.846608036 +0000 UTC m=+0.148445262 container init f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:44:56 np0005479822 podman[77876]: 2025-10-10 09:44:56.861386219 +0000 UTC m=+0.163223495 container start f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 05:44:56 np0005479822 podman[77876]: 2025-10-10 09:44:56.866179414 +0000 UTC m=+0.168016580 container attach f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0 done with init, starting boot process
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0 start_boot
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 10 05:44:57 np0005479822 ceph-osd[76867]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]: [
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:    {
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "available": false,
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "being_replaced": false,
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "ceph_device_lvm": false,
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "lsm_data": {},
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "lvs": [],
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "path": "/dev/sr0",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "rejected_reasons": [
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "Insufficient space (<5GB)",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "Has a FileSystem"
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        ],
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        "sys_api": {
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "actuators": null,
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "device_nodes": [
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:                "sr0"
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            ],
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "devname": "sr0",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "human_readable_size": "482.00 KB",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "id_bus": "ata",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "model": "QEMU DVD-ROM",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "nr_requests": "2",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "parent": "/dev/sr0",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "partitions": {},
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "path": "/dev/sr0",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "removable": "1",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "rev": "2.5+",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "ro": "0",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "rotational": "0",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "sas_address": "",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "sas_device_handle": "",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "scheduler_mode": "mq-deadline",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "sectors": 0,
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "sectorsize": "2048",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "size": 493568.0,
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "support_discard": "2048",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "type": "disk",
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:            "vendor": "QEMU"
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:        }
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]:    }
Oct 10 05:44:57 np0005479822 quirky_euclid[77892]: ]
Oct 10 05:44:57 np0005479822 systemd[1]: libpod-f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8.scope: Deactivated successfully.
Oct 10 05:44:57 np0005479822 podman[78826]: 2025-10-10 09:44:57.846305012 +0000 UTC m=+0.045591684 container died f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:44:57 np0005479822 systemd[1]: var-lib-containers-storage-overlay-30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1-merged.mount: Deactivated successfully.
Oct 10 05:44:57 np0005479822 podman[78826]: 2025-10-10 09:44:57.933706319 +0000 UTC m=+0.132992921 container remove f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:44:57 np0005479822 systemd[1]: libpod-conmon-f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8.scope: Deactivated successfully.
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.800 iops: 2508.856 elapsed_sec: 1.196
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: log_channel(cluster) log [WRN] : OSD bench result of 2508.856277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 05:45:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1[76863]: 2025-10-10T09:45:00.906+0000 7fc8af335640 -1 osd.1 0 waiting for initial osdmap
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 0 waiting for initial osdmap
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 check_osdmap_features require_osd_release unknown -> squid
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 05:45:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1[76863]: 2025-10-10T09:45:00.941+0000 7fc8aa95d640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 set_numa_affinity not setting numa affinity
Oct 10 05:45:00 np0005479822 ceph-osd[76867]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 10 05:45:01 np0005479822 ceph-osd[76867]: osd.1 13 state: booting -> active
Oct 10 05:45:01 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:02 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.457141197 +0000 UTC m=+0.048587651 container create 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:45:20 np0005479822 systemd[1]: Started libpod-conmon-381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff.scope.
Oct 10 05:45:20 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.43639818 +0000 UTC m=+0.027844634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.546083505 +0000 UTC m=+0.137530029 container init 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.55554289 +0000 UTC m=+0.146989334 container start 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.559092333 +0000 UTC m=+0.150538867 container attach 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 05:45:20 np0005479822 relaxed_visvesvaraya[78949]: 167 167
Oct 10 05:45:20 np0005479822 systemd[1]: libpod-381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff.scope: Deactivated successfully.
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.564641337 +0000 UTC m=+0.156087781 container died 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 05:45:20 np0005479822 systemd[1]: var-lib-containers-storage-overlay-70e6ed51daa7c150156491490c6ed0d765fcddc8c21e20769b0dd5ed3be34660-merged.mount: Deactivated successfully.
Oct 10 05:45:20 np0005479822 podman[78933]: 2025-10-10 09:45:20.613917015 +0000 UTC m=+0.205363509 container remove 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 05:45:20 np0005479822 systemd[1]: libpod-conmon-381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff.scope: Deactivated successfully.
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.710690426 +0000 UTC m=+0.062554185 container create d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 05:45:20 np0005479822 systemd[1]: Started libpod-conmon-d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66.scope.
Oct 10 05:45:20 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.678794118 +0000 UTC m=+0.030657977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:45:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.795967398 +0000 UTC m=+0.147831197 container init d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.810503135 +0000 UTC m=+0.162366904 container start d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.813813841 +0000 UTC m=+0.165677640 container attach d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 05:45:20 np0005479822 systemd[1]: libpod-d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66.scope: Deactivated successfully.
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.92787619 +0000 UTC m=+0.279739979 container died d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:45:20 np0005479822 systemd[1]: var-lib-containers-storage-overlay-7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf-merged.mount: Deactivated successfully.
Oct 10 05:45:20 np0005479822 podman[78966]: 2025-10-10 09:45:20.981798169 +0000 UTC m=+0.333661958 container remove d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 05:45:20 np0005479822 systemd[1]: libpod-conmon-d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66.scope: Deactivated successfully.
Oct 10 05:45:21 np0005479822 systemd[1]: Reloading.
Oct 10 05:45:21 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:45:21 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:45:21 np0005479822 systemd[1]: Reloading.
Oct 10 05:45:21 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:45:21 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:45:21 np0005479822 systemd[1]: Starting Ceph mon.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:45:21 np0005479822 podman[79147]: 2025-10-10 09:45:21.933073718 +0000 UTC m=+0.067701077 container create ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:45:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:21 np0005479822 podman[79147]: 2025-10-10 09:45:21.904061526 +0000 UTC m=+0.038688935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:45:22 np0005479822 podman[79147]: 2025-10-10 09:45:22.004597424 +0000 UTC m=+0.139224813 container init ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 05:45:22 np0005479822 podman[79147]: 2025-10-10 09:45:22.018523736 +0000 UTC m=+0.153151085 container start ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 05:45:22 np0005479822 bash[79147]: ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b
Oct 10 05:45:22 np0005479822 systemd[1]: Started Ceph mon.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: pidfile_write: ignore empty --pid-file
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: load: jerasure load: lrc 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: RocksDB version: 7.9.2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Git sha 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: DB SUMMARY
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: DB Session ID:  7GCI9JJE38KWUSAORMRB
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: CURRENT file:  CURRENT
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                         Options.error_if_exists: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.create_if_missing: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                                     Options.env: 0x5625d2d9bc20
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                                Options.info_log: 0x5625d3e3fa20
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                              Options.statistics: (nil)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                               Options.use_fsync: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                              Options.db_log_dir: 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                                 Options.wal_dir: 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                    Options.write_buffer_manager: 0x5625d3e43900
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.unordered_write: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                               Options.row_cache: None
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                              Options.wal_filter: None
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.two_write_queues: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.wal_compression: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.atomic_flush: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.max_background_jobs: 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.max_background_compactions: -1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.max_subcompactions: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.max_total_wal_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                          Options.max_open_files: -1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:       Options.compaction_readahead_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Compression algorithms supported:
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kZSTD supported: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kXpressCompression supported: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kBZip2Compression supported: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kLZ4Compression supported: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kZlibCompression supported: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: #011kSnappyCompression supported: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:           Options.merge_operator: 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:        Options.compaction_filter: None
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5625d3e3e5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5625d3e63350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:        Options.write_buffer_size: 33554432
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:  Options.max_write_buffer_number: 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.compression: NoCompression
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.num_levels: 7
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                           Options.bloom_locality: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                               Options.ttl: 2592000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                       Options.enable_blob_files: false
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                           Options.min_blob_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 72205880-e92d-427e-a84d-d60d79c79ead
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089522094778, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089522097179, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089522097446, "job": 1, "event": "recovery_finished"}
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5625d3e64e00
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: DB pointer 0x5625d3f6e000
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(???) e0 preinit fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012btime 2025-10-10T09:43:15:731413+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 3314933000852226048, adjusting msgr requires
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Deploying daemon crash.compute-1 on compute-1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Deploying daemon osd.0 on compute-0
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Deploying daemon osd.1 on compute-1
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-1 to  5248M
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: OSD bench result of 8693.274022 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206] boot
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: OSD bench result of 2508.856277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396] boot
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Deploying daemon mon.compute-2 on compute-2
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: Cluster is now healthy
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 05:45:22 np0005479822 ceph-mon[79167]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct 10 05:45:28 np0005479822 ceph-mon[79167]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct 10 05:45:28 np0005479822 ceph-mon[79167]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct 10 05:45:28 np0005479822 ceph-mon[79167]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 10 05:45:28 np0005479822 ceph-mon[79167]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: Deploying daemon mon.compute-1 on compute-1
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-0 calling monitor election
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-2 calling monitor election
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: overall HEALTH_OK
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: Deploying daemon mgr.compute-2.gkrssp on compute-2
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,os=Linux}
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-0 calling monitor election
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-2 calling monitor election
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-1 calling monitor election
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: overall HEALTH_OK
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:31 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 05:45:31 np0005479822 podman[79298]: 2025-10-10 09:45:31.903852236 +0000 UTC m=+0.046855956 container create f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:45:31 np0005479822 systemd[1]: Started libpod-conmon-f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed.scope.
Oct 10 05:45:31 np0005479822 podman[79298]: 2025-10-10 09:45:31.882732699 +0000 UTC m=+0.025736459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:45:31 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:45:31 np0005479822 podman[79298]: 2025-10-10 09:45:31.996636043 +0000 UTC m=+0.139639783 container init f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:45:32 np0005479822 podman[79298]: 2025-10-10 09:45:32.005108014 +0000 UTC m=+0.148111744 container start f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:45:32 np0005479822 podman[79298]: 2025-10-10 09:45:32.008990834 +0000 UTC m=+0.151994574 container attach f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 05:45:32 np0005479822 friendly_khorana[79314]: 167 167
Oct 10 05:45:32 np0005479822 systemd[1]: libpod-f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed.scope: Deactivated successfully.
Oct 10 05:45:32 np0005479822 podman[79298]: 2025-10-10 09:45:32.011247003 +0000 UTC m=+0.154250763 container died f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 05:45:32 np0005479822 systemd[1]: var-lib-containers-storage-overlay-b8c9af3d15f3bacf7cda741524897af99997b21e09a238c34b6eaf4bad8d5f76-merged.mount: Deactivated successfully.
Oct 10 05:45:32 np0005479822 podman[79298]: 2025-10-10 09:45:32.049841664 +0000 UTC m=+0.192845394 container remove f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:45:32 np0005479822 systemd[1]: libpod-conmon-f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed.scope: Deactivated successfully.
Oct 10 05:45:32 np0005479822 systemd[1]: Reloading.
Oct 10 05:45:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e15 _set_new_cache_sizes cache_size:1019933712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:45:32 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:45:32 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:45:32 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 05:45:32 np0005479822 ceph-mon[79167]: Deploying daemon mgr.compute-1.rfugxc on compute-1
Oct 10 05:45:32 np0005479822 systemd[1]: Reloading.
Oct 10 05:45:32 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:45:32 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:45:32 np0005479822 systemd[1]: Starting Ceph mgr.compute-1.rfugxc for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:45:33 np0005479822 podman[79457]: 2025-10-10 09:45:33.072855614 +0000 UTC m=+0.063706053 container create 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 05:45:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/var/lib/ceph/mgr/ceph-compute-1.rfugxc supports timestamps until 2038 (0x7fffffff)
Oct 10 05:45:33 np0005479822 podman[79457]: 2025-10-10 09:45:33.04031747 +0000 UTC m=+0.031167919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:45:33 np0005479822 podman[79457]: 2025-10-10 09:45:33.145512159 +0000 UTC m=+0.136362658 container init 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 05:45:33 np0005479822 podman[79457]: 2025-10-10 09:45:33.157967082 +0000 UTC m=+0.148817521 container start 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 05:45:33 np0005479822 bash[79457]: 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b
Oct 10 05:45:33 np0005479822 systemd[1]: Started Ceph mgr.compute-1.rfugxc for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 05:45:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 05:45:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:33.343+0000 7f48edd15140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:45:33 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 05:45:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:33.419+0000 7f48edd15140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:45:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:34 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 05:45:34 np0005479822 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:45:34 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 05:45:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:34.243+0000 7f48edd15140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:45:34 np0005479822 ceph-mon[79167]: Deploying daemon crash.compute-2 on compute-2
Oct 10 05:45:34 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 05:45:34 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 05:45:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Oct 10 05:45:34 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:34 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 05:45:34 np0005479822 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:45:34 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 05:45:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:34.860+0000 7f48edd15140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 05:45:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 05:45:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:  from numpy import show_config as show_numpy_config
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 05:45:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:35.024+0000 7f48edd15140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 05:45:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:35.101+0000 7f48edd15140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 05:45:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:35.236+0000 7f48edd15140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 05:45:35 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 05:45:35 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 05:45:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.251+0000 7f48edd15140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mon[79167]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 05:45:36 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 05:45:36 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:36 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 05:45:36 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 05:45:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.468+0000 7f48edd15140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 05:45:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.540+0000 7f48edd15140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 05:45:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.606+0000 7f48edd15140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 05:45:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.681+0000 7f48edd15140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.749+0000 7f48edd15140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:45:36 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 05:45:36 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:45:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:37.085+0000 7f48edd15140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e20 _set_new_cache_sizes cache_size:1020053257 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:45:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:37.175+0000 7f48edd15140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.102:0/3277074974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]': finished
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:45:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:37.582+0000 7f48edd15140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:45:37 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 05:45:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.114+0000 7f48edd15140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.179+0000 7f48edd15140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.258+0000 7f48edd15140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.398+0000 7f48edd15140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 05:45:38 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.466+0000 7f48edd15140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.609+0000 7f48edd15140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 05:45:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.813+0000 7f48edd15140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:45:38 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Oct 10 05:45:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:39 np0005479822 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:45:39 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 05:45:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:39.089+0000 7f48edd15140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:45:39 np0005479822 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:45:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:39.168+0000 7f48edd15140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:45:39 np0005479822 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x5647ad6bad00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 05:45:39 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 05:45:39 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:45:39 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 05:45:39 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:45:39 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:39 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Oct 10 05:45:39 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=10.466954231s) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active pruub 55.323619843s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:39 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=10.466954231s) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown pruub 55.323619843s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:39 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:45:40 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 10 05:45:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:41 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct 10 05:45:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Oct 10 05:45:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:45:42 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 10 05:45:42 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 10 05:45:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: Deploying daemon osd.2 on compute-2
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:43 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 10 05:45:43 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 10 05:45:43 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct 10 05:45:44 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 10 05:45:44 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 10 05:45:44 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 10 05:45:44 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:45:44 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:45:44 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct 10 05:45:45 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 10 05:45:45 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 10 05:45:46 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 10 05:45:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct 10 05:45:46 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 10 05:45:46 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 10 05:45:47 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 10 05:45:47 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:47 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:47 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:45:47 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 10 05:45:47 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 10 05:45:48 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 10 05:45:48 np0005479822 ceph-mon[79167]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 05:45:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct 10 05:45:48 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 10 05:45:48 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 10 05:45:49 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 10 05:45:49 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:49 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:49 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 10 05:45:49 np0005479822 podman[79655]: 2025-10-10 09:45:49.906430365 +0000 UTC m=+0.104169537 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:45:49 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 10 05:45:50 np0005479822 podman[79655]: 2025-10-10 09:45:50.024834908 +0000 UTC m=+0.222574030 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:45:50 np0005479822 ceph-mon[79167]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 05:45:50 np0005479822 ceph-mon[79167]: Cluster is now healthy
Oct 10 05:45:50 np0005479822 ceph-mon[79167]: from='osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 05:45:50 np0005479822 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 05:45:50 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct 10 05:45:50 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Oct 10 05:45:50 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:45:51 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.118583679s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.874893188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124971390s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881309509s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125008583s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881362915s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.118496895s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.874893188s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125008583s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881362915s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124971390s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881309509s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124822617s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881538391s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124822617s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881538391s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125357628s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882019043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125357628s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882019043s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124114037s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881301880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124033928s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881301880s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124660492s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882041931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124625206s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123898506s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881355286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123868942s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881355286s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123972893s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881561279s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123950005s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881561279s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123935699s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881607056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123908043s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881607056s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124008179s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881774902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124008179s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881774902s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123831749s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881782532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123831749s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881782532s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123937607s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881927490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123937607s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881927490s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123929024s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882003784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123929024s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882003784s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123857498s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881988525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123835564s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881988525s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123817444s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881996155s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123817444s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881996155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123723030s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882041931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123723030s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123621941s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882072449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123621941s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882072449s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123581886s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882110596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123581886s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882110596s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123530388s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882301331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123305321s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882102966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123510361s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882301331s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123305321s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882102966s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123338699s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882225037s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123338699s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882225037s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Oct 10 05:45:51 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 05:45:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.10( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1f( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.11( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.15( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.16( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.13( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.9( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.a( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.8( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.a( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.15( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.7( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.2( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.7( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.5( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.e( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.f( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.3( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.d( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.2( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.e( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1c( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.c( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.1b( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.19( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1b( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.18( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.1a( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.1a( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.18( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Oct 10 05:45:52 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Oct 10 05:45:53 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:45:53 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:45:53 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2975567301' entity='client.admin' 
Oct 10 05:45:53 np0005479822 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:45:53 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:53 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:53 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 10 05:45:53 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: Saving service ingress.rgw.default spec with placement count:2
Oct 10 05:45:54 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:54 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 10 05:45:54 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 10 05:45:55 np0005479822 ceph-mon[79167]: OSD bench result of 9119.333889 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 05:45:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct 10 05:45:55 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 10 05:45:55 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354] boot
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: Saving service node-exporter spec with placement *
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: Saving service grafana spec with placement compute-0;count:1
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: Saving service prometheus spec with placement compute-0;count:1
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: Saving service alertmanager spec with placement compute-0;count:1
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct 10 05:45:56 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Oct 10 05:45:56 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Oct 10 05:45:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:45:57 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:57 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2898111592' entity='client.admin' 
Oct 10 05:45:57 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 10 05:45:57 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867969990s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882225037s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867820263s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882110596s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867899418s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882225037s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867742062s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882102966s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867722988s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882110596s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867669582s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882102966s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.866700649s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882072449s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.865193844s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882072449s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.865051746s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864959717s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881996155s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864971638s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864916801s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881996155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864692211s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882003784s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864659786s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882003784s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864485741s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881927490s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864300728s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881782532s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864441872s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881927490s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864252090s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881782532s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863999844s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881774902s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863965988s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881774902s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863377571s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881538391s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863338947s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881538391s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863769054s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882019043s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863054276s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881362915s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.862929821s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881309509s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863005161s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881362915s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.862899303s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881309509s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863616943s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882019043s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:45:58 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:58 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:58 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1237849469' entity='client.admin' 
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 10 05:45:58 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: Reconfiguring mgr.compute-0.xkdepb (monmap changed)...
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 05:45:59 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/3410162506' entity='client.admin' 
Oct 10 05:45:59 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 10 05:45:59 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 10 05:46:00 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:00 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:00 np0005479822 ceph-mon[79167]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 10 05:46:00 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 05:46:00 np0005479822 ceph-mon[79167]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 10 05:46:00 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 10 05:46:00 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: Reconfiguring osd.0 (monmap changed)...
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: Reconfiguring daemon osd.0 on compute-0
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2517476288' entity='client.admin' 
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:01 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 05:46:01 np0005479822 python3[80338]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.718667772 +0000 UTC m=+0.041957020 container create 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 05:46:01 np0005479822 systemd[1]: Started libpod-conmon-2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9.scope.
Oct 10 05:46:01 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.700850449 +0000 UTC m=+0.024139717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.819761675 +0000 UTC m=+0.143050993 container init 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.828889623 +0000 UTC m=+0.152178891 container start 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.834993209 +0000 UTC m=+0.158282517 container attach 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:46:01 np0005479822 zealous_turing[80415]: 167 167
Oct 10 05:46:01 np0005479822 systemd[1]: libpod-2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9.scope: Deactivated successfully.
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.837345313 +0000 UTC m=+0.160634541 container died 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:46:01 np0005479822 systemd[1]: var-lib-containers-storage-overlay-8e2853bb9aa7abe75bd64a412927eac2cb74ab8cda85c7648de019d843c890e1-merged.mount: Deactivated successfully.
Oct 10 05:46:01 np0005479822 podman[80399]: 2025-10-10 09:46:01.880601786 +0000 UTC m=+0.203891024 container remove 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:46:01 np0005479822 systemd[1]: libpod-conmon-2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9.scope: Deactivated successfully.
Oct 10 05:46:01 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 10 05:46:01 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: Reconfiguring osd.1 (monmap changed)...
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 05:46:02 np0005479822 ceph-mon[79167]: Reconfiguring daemon osd.1 on compute-1
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.562953482 +0000 UTC m=+0.072703784 container create 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 05:46:02 np0005479822 systemd[1]: Started libpod-conmon-1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9.scope.
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.532623969 +0000 UTC m=+0.042374311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:46:02 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.667792236 +0000 UTC m=+0.177542538 container init 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.678220219 +0000 UTC m=+0.187970521 container start 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.682407432 +0000 UTC m=+0.192157744 container attach 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 05:46:02 np0005479822 ecstatic_borg[80514]: 167 167
Oct 10 05:46:02 np0005479822 systemd[1]: libpod-1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9.scope: Deactivated successfully.
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.686439992 +0000 UTC m=+0.196190254 container died 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 05:46:02 np0005479822 systemd[1]: var-lib-containers-storage-overlay-98989988f73f190002547e2c425aac850cb1e9b76b25cadb003e493c30f82224-merged.mount: Deactivated successfully.
Oct 10 05:46:02 np0005479822 podman[80498]: 2025-10-10 09:46:02.729565282 +0000 UTC m=+0.239315554 container remove 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 05:46:02 np0005479822 systemd[1]: libpod-conmon-1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9.scope: Deactivated successfully.
Oct 10 05:46:03 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 10 05:46:03 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 10 05:46:03 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.admin' 
Oct 10 05:46:03 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:03 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:03 np0005479822 ceph-mon[79167]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 10 05:46:03 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 05:46:03 np0005479822 ceph-mon[79167]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.472317026 +0000 UTC m=+0.067934804 container create b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 05:46:03 np0005479822 systemd[1]: Started libpod-conmon-b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d.scope.
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.443137025 +0000 UTC m=+0.038754853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:46:03 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.560747066 +0000 UTC m=+0.156364884 container init b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.57451582 +0000 UTC m=+0.170133608 container start b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.578419275 +0000 UTC m=+0.174037063 container attach b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 05:46:03 np0005479822 zen_borg[80623]: 167 167
Oct 10 05:46:03 np0005479822 systemd[1]: libpod-b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d.scope: Deactivated successfully.
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.584234933 +0000 UTC m=+0.179852721 container died b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:46:03 np0005479822 systemd[1]: var-lib-containers-storage-overlay-6ef8610fdf4cb2c21bba4dddaa82f2acff09b3f99f4b1107782956b12e15881e-merged.mount: Deactivated successfully.
Oct 10 05:46:03 np0005479822 podman[80607]: 2025-10-10 09:46:03.639147873 +0000 UTC m=+0.234765651 container remove b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:46:03 np0005479822 systemd[1]: libpod-conmon-b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d.scope: Deactivated successfully.
Oct 10 05:46:03 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 10 05:46:03 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/699590867' entity='client.admin' 
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:04 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 05:46:04 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 10 05:46:05 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 10 05:46:05 np0005479822 ceph-mon[79167]: Reconfiguring mgr.compute-2.gkrssp (monmap changed)...
Oct 10 05:46:05 np0005479822 ceph-mon[79167]: Reconfiguring daemon mgr.compute-2.gkrssp on compute-2
Oct 10 05:46:05 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:05 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:05 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 05:46:05 np0005479822 podman[80762]: 2025-10-10 09:46:05.767986418 +0000 UTC m=+0.081315208 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 05:46:05 np0005479822 podman[80762]: 2025-10-10 09:46:05.900804071 +0000 UTC m=+0.214132811 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 05:46:05 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 10 05:46:05 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 10 05:46:06 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:06 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:06 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 05:46:06 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:06 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:06 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Oct 10 05:46:06 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Oct 10 05:46:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:07 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:46:07 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:07 np0005479822 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:46:07 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 05:46:07 np0005479822 systemd[1]: session-33.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-33.scope: Consumed 1min 12.729s CPU time.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 33 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd[1]: session-29.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-23.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-24.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-27.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-26.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 29 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd[1]: session-28.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-32.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-31.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-21.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd[1]: session-25.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 23 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd[1]: session-30.scope: Deactivated successfully.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 26 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 27 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 24 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 31 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 32 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 05:46:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 28 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 25 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 21 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Session 30 logged out. Waiting for processes to exit.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 33.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 29.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 23.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 24.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 27.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 26.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 28.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 32.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 31.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 21.
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 25.
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 05:46:07 np0005479822 systemd-logind[789]: Removed session 30.
Oct 10 05:46:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:07.776+0000 7f5eeb717140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 05:46:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:07.854+0000 7f5eeb717140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:46:07 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 05:46:07 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 10 05:46:07 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 10 05:46:08 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 05:46:08 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 05:46:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:08.636+0000 7f5eeb717140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:46:08 np0005479822 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:46:08 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 05:46:08 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 10 05:46:08 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.233+0000 7f5eeb717140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:  from numpy import show_config as show_numpy_config
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.394+0000 7f5eeb717140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.461+0000 7f5eeb717140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 05:46:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.597+0000 7f5eeb717140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 05:46:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 05:46:09 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 10 05:46:09 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 05:46:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.565+0000 7f5eeb717140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 05:46:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.771+0000 7f5eeb717140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 05:46:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.848+0000 7f5eeb717140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 05:46:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.913+0000 7f5eeb717140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:46:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 05:46:10 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 10 05:46:10 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 10 05:46:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.001+0000 7f5eeb717140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 05:46:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.072+0000 7f5eeb717140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 05:46:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.388+0000 7f5eeb717140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 05:46:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.479+0000 7f5eeb717140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 05:46:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.924+0000 7f5eeb717140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:46:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 05:46:12 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 10 05:46:12 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 10 05:46:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.458+0000 7f5eeb717140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 05:46:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.525+0000 7f5eeb717140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 05:46:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.601+0000 7f5eeb717140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 05:46:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.745+0000 7f5eeb717140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 05:46:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.821+0000 7f5eeb717140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 05:46:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.978+0000 7f5eeb717140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:46:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 05:46:13 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 10 05:46:13 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 10 05:46:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:13.189+0000 7f5eeb717140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 05:46:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:13.455+0000 7f5eeb717140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 05:46:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:13.521+0000 7f5eeb717140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: mgr load Constructed class from module: dashboard
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Starting engine...
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x55dd163c3860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 05:46:13 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Engine started...
Oct 10 05:46:13 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct 10 05:46:14 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 10 05:46:14 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 10 05:46:14 np0005479822 systemd-logind[789]: New session 34 of user ceph-admin.
Oct 10 05:46:14 np0005479822 systemd[1]: Started Session 34 of User ceph-admin.
Oct 10 05:46:14 np0005479822 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 05:46:14 np0005479822 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 05:46:14 np0005479822 ceph-mon[79167]: Manager daemon compute-0.xkdepb is now available
Oct 10 05:46:14 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 05:46:14 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 05:46:15 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 10 05:46:15 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 10 05:46:15 np0005479822 podman[81020]: 2025-10-10 09:46:15.320303473 +0000 UTC m=+0.074498453 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 05:46:15 np0005479822 podman[81020]: 2025-10-10 09:46:15.442729344 +0000 UTC m=+0.196924284 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 05:46:15 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 10 05:46:16 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Bus STARTING
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Serving on https://192.168.122.100:7150
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Client ('192.168.122.100', 44336) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:16 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct 10 05:46:17 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Serving on http://192.168.122.100:8765
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Bus STARTED
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 05:46:17 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:46:18 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 10 05:46:18 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 05:46:18 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:19 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 10 05:46:19 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:19 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:20 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 10 05:46:20 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  1: '-n'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  2: 'mgr.compute-1.rfugxc'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  3: '-f'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  4: '--setuser'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  5: 'ceph'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  6: '--setgroup'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  7: 'ceph'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:20 np0005479822 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:20 np0005479822 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Oct 10 05:46:20 np0005479822 systemd[1]: session-34.scope: Deactivated successfully.
Oct 10 05:46:20 np0005479822 systemd[1]: session-34.scope: Consumed 6.311s CPU time.
Oct 10 05:46:20 np0005479822 systemd-logind[789]: Removed session 34.
Oct 10 05:46:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 05:46:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 05:46:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:20.936+0000 7f43bcc25140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:46:20 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 05:46:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:21.013+0000 7f43bcc25140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:46:21 np0005479822 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:46:21 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 05:46:21 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 10 05:46:21 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 10 05:46:21 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 05:46:21 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 05:46:21 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 05:46:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:21.861+0000 7f43bcc25140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:46:21 np0005479822 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:46:21 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 05:46:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:22 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Oct 10 05:46:22 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.460+0000 7f43bcc25140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:  from numpy import show_config as show_numpy_config
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.612+0000 7f43bcc25140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.678+0000 7f43bcc25140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 05:46:22 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 05:46:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.803+0000 7f43bcc25140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:46:22 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 05:46:23 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 10 05:46:23 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 05:46:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:23.726+0000 7f43bcc25140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 05:46:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:23.935+0000 7f43bcc25140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:23 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 05:46:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.012+0000 7f43bcc25140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 05:46:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.077+0000 7f43bcc25140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 05:46:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.150+0000 7f43bcc25140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 05:46:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.215+0000 7f43bcc25140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 05:46:24 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 10 05:46:24 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 10 05:46:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.528+0000 7f43bcc25140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 05:46:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.622+0000 7f43bcc25140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 05:46:24 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 05:46:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.047+0000 7f43bcc25140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 05:46:25 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Oct 10 05:46:25 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Oct 10 05:46:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.561+0000 7f43bcc25140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 05:46:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.630+0000 7f43bcc25140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 05:46:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.715+0000 7f43bcc25140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 05:46:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.860+0000 7f43bcc25140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 05:46:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.931+0000 7f43bcc25140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:46:25 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.084+0000 7f43bcc25140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 05:46:26 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 10 05:46:26 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.294+0000 7f43bcc25140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.539+0000 7f43bcc25140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.602+0000 7f43bcc25140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x5616dd955860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr respawn  1: '-n'
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr respawn  2: 'mgr.compute-1.rfugxc'
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr respawn  3: '-f'
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr respawn  4: '--setuser'
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.861+0000 7f2ea663d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 05:46:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.940+0000 7f2ea663d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:46:26 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 05:46:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct 10 05:46:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:27 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 10 05:46:27 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 10 05:46:27 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 05:46:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:27.675+0000 7f2ea663d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:46:27 np0005479822 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:46:27 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 05:46:27 np0005479822 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 05:46:27 np0005479822 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 05:46:28 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 10 05:46:28 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.260+0000 7f2ea663d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:  from numpy import show_config as show_numpy_config
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.433+0000 7f2ea663d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.497+0000 7f2ea663d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 05:46:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.625+0000 7f2ea663d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 05:46:28 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 05:46:29 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 10 05:46:29 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 05:46:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.566+0000 7f2ea663d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 05:46:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.769+0000 7f2ea663d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 05:46:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.849+0000 7f2ea663d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 05:46:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.913+0000 7f2ea663d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 05:46:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.988+0000 7f2ea663d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:46:29 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 05:46:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.060+0000 7f2ea663d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 05:46:30 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 10 05:46:30 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 10 05:46:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.385+0000 7f2ea663d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 05:46:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.471+0000 7f2ea663d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 05:46:30 np0005479822 systemd[1]: Stopping User Manager for UID 42477...
Oct 10 05:46:30 np0005479822 systemd[71841]: Activating special unit Exit the Session...
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped target Main User Target.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped target Basic System.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped target Paths.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped target Sockets.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped target Timers.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 05:46:30 np0005479822 systemd[71841]: Closed D-Bus User Message Bus Socket.
Oct 10 05:46:30 np0005479822 systemd[71841]: Stopped Create User's Volatile Files and Directories.
Oct 10 05:46:30 np0005479822 systemd[71841]: Removed slice User Application Slice.
Oct 10 05:46:30 np0005479822 systemd[71841]: Reached target Shutdown.
Oct 10 05:46:30 np0005479822 systemd[71841]: Finished Exit the Session.
Oct 10 05:46:30 np0005479822 systemd[71841]: Reached target Exit the Session.
Oct 10 05:46:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.873+0000 7f2ea663d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:46:30 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 05:46:30 np0005479822 systemd[1]: user@42477.service: Deactivated successfully.
Oct 10 05:46:30 np0005479822 systemd[1]: Stopped User Manager for UID 42477.
Oct 10 05:46:30 np0005479822 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 10 05:46:30 np0005479822 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 10 05:46:30 np0005479822 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 10 05:46:30 np0005479822 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 10 05:46:30 np0005479822 systemd[1]: Removed slice User Slice of UID 42477.
Oct 10 05:46:30 np0005479822 systemd[1]: user-42477.slice: Consumed 1min 20.821s CPU time.
Oct 10 05:46:31 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 10 05:46:31 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 10 05:46:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.440+0000 7f2ea663d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 05:46:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.553+0000 7f2ea663d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 05:46:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.633+0000 7f2ea663d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 05:46:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.783+0000 7f2ea663d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 05:46:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.856+0000 7f2ea663d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:46:31 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 05:46:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.011+0000 7f2ea663d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 05:46:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.228+0000 7f2ea663d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 05:46:32 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 10 05:46:32 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 10 05:46:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.497+0000 7f2ea663d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 05:46:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.563+0000 7f2ea663d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: mgr load Constructed class from module: dashboard
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Starting engine...
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x562510b79860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 05:46:32 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Engine started...
Oct 10 05:46:33 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 10 05:46:33 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 10 05:46:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct 10 05:46:33 np0005479822 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 05:46:33 np0005479822 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 05:46:33 np0005479822 ceph-mon[79167]: Manager daemon compute-0.xkdepb is now available
Oct 10 05:46:33 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 05:46:33 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 05:46:33 np0005479822 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 05:46:33 np0005479822 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 05:46:33 np0005479822 systemd-logind[789]: New session 35 of user ceph-admin.
Oct 10 05:46:33 np0005479822 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 05:46:34 np0005479822 systemd[1]: Starting User Manager for UID 42477...
Oct 10 05:46:34 np0005479822 systemd[82226]: Queued start job for default target Main User Target.
Oct 10 05:46:34 np0005479822 systemd[82226]: Created slice User Application Slice.
Oct 10 05:46:34 np0005479822 systemd[82226]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 05:46:34 np0005479822 systemd[82226]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 05:46:34 np0005479822 systemd[82226]: Reached target Paths.
Oct 10 05:46:34 np0005479822 systemd[82226]: Reached target Timers.
Oct 10 05:46:34 np0005479822 systemd[82226]: Starting D-Bus User Message Bus Socket...
Oct 10 05:46:34 np0005479822 systemd[82226]: Starting Create User's Volatile Files and Directories...
Oct 10 05:46:34 np0005479822 systemd[82226]: Listening on D-Bus User Message Bus Socket.
Oct 10 05:46:34 np0005479822 systemd[82226]: Reached target Sockets.
Oct 10 05:46:34 np0005479822 systemd[82226]: Finished Create User's Volatile Files and Directories.
Oct 10 05:46:34 np0005479822 systemd[82226]: Reached target Basic System.
Oct 10 05:46:34 np0005479822 systemd[82226]: Reached target Main User Target.
Oct 10 05:46:34 np0005479822 systemd[82226]: Startup finished in 158ms.
Oct 10 05:46:34 np0005479822 systemd[1]: Started User Manager for UID 42477.
Oct 10 05:46:34 np0005479822 systemd[1]: Started Session 35 of User ceph-admin.
Oct 10 05:46:34 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 10 05:46:34 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e2 new map
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e2 print_map
e2
btime 2025-10-10T09:46:34:511425+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-10T09:46:34.511367+0000
modified	2025-10-10T09:46:34.511367+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 10 05:46:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 podman[82365]: 2025-10-10 09:46:35.195998557 +0000 UTC m=+0.092998555 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 05:46:35 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 10 05:46:35 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 10 05:46:35 np0005479822 podman[82365]: 2025-10-10 09:46:35.320882895 +0000 UTC m=+0.217882843 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:34] ENGINE Bus STARTING
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:34] ENGINE Serving on http://192.168.122.100:8765
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:35] ENGINE Serving on https://192.168.122.100:7150
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:35] ENGINE Bus STARTED
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:46:35] ENGINE Client ('192.168.122.100', 60804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:36 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 10 05:46:36 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:37 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 10 05:46:37 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 10 05:46:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 10 05:46:38 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 10 05:46:38 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:38 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:39 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 10 05:46:39 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:39 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct 10 05:46:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 10 05:46:40 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 10 05:46:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct 10 05:46:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct 10 05:46:41 np0005479822 ceph-mon[79167]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 05:46:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:42 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:42 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:42 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:43 np0005479822 systemd[1]: Reloading.
Oct 10 05:46:43 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:46:43 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:46:43 np0005479822 systemd[1]: Reloading.
Oct 10 05:46:43 np0005479822 ceph-mon[79167]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 10 05:46:44 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:46:44 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:46:44 np0005479822 systemd[1]: Starting Ceph node-exporter.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:46:44 np0005479822 bash[83706]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 10 05:46:44 np0005479822 bash[83706]: Getting image source signatures
Oct 10 05:46:44 np0005479822 bash[83706]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 10 05:46:44 np0005479822 bash[83706]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 10 05:46:44 np0005479822 bash[83706]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 10 05:46:45 np0005479822 bash[83706]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 10 05:46:45 np0005479822 bash[83706]: Writing manifest to image destination
Oct 10 05:46:45 np0005479822 podman[83706]: 2025-10-10 09:46:45.645737804 +0000 UTC m=+1.219258815 container create db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:46:45 np0005479822 podman[83706]: 2025-10-10 09:46:45.632815083 +0000 UTC m=+1.206336124 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 10 05:46:45 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e94d6beb7318b535b36ba4007be4f72a83f864adf9419ced3dd0ad671753a888/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 10 05:46:45 np0005479822 podman[83706]: 2025-10-10 09:46:45.71121401 +0000 UTC m=+1.284735031 container init db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:46:45 np0005479822 podman[83706]: 2025-10-10 09:46:45.719621498 +0000 UTC m=+1.293142509 container start db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:46:45 np0005479822 bash[83706]: db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.731Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.732Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.733Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.734Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.734Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.734Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 10 05:46:45 np0005479822 systemd[1]: Started Ceph node-exporter.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=arp
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=bcache
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=bonding
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=cpu
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=dmi
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=edac
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=entropy
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=filefd
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=netclass
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=netdev
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=netstat
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=nfs
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=nvme
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=os
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=pressure
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=rapl
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=selinux
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=softnet
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=stat
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=textfile
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=time
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=uname
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=xfs
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=zfs
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.740Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 10 05:46:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.740Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 10 05:46:46 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/1088819812' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 10 05:46:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:47 np0005479822 ceph-mon[79167]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 10 05:46:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:46:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 05:46:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 05:46:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: Deploying daemon rgw.rgw.compute-2.qujzwn on compute-2
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.353176488 +0000 UTC m=+0.068297701 container create 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 05:46:55 np0005479822 systemd[1]: Started libpod-conmon-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope.
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.322186614 +0000 UTC m=+0.037307827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:46:55 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.457783255 +0000 UTC m=+0.172904498 container init 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.471870069 +0000 UTC m=+0.186991262 container start 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.476249859 +0000 UTC m=+0.191371202 container attach 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 05:46:55 np0005479822 sad_curran[83899]: 167 167
Oct 10 05:46:55 np0005479822 systemd[1]: libpod-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope: Deactivated successfully.
Oct 10 05:46:55 np0005479822 conmon[83899]: conmon 0baaccc9608734db107f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope/container/memory.events
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.481692677 +0000 UTC m=+0.196813870 container died 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:46:55 np0005479822 systemd[1]: var-lib-containers-storage-overlay-7af365daf727c63a5dbb1d6f5e60569a23f86b8bd001baa3a66e7129d3d944a2-merged.mount: Deactivated successfully.
Oct 10 05:46:55 np0005479822 podman[83883]: 2025-10-10 09:46:55.530578527 +0000 UTC m=+0.245699730 container remove 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:46:55 np0005479822 systemd[1]: libpod-conmon-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope: Deactivated successfully.
Oct 10 05:46:55 np0005479822 systemd[1]: Reloading.
Oct 10 05:46:55 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:46:55 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:46:55 np0005479822 systemd[1]: Reloading.
Oct 10 05:46:56 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:46:56 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:46:56 np0005479822 ceph-mon[79167]: Deploying daemon rgw.rgw.compute-1.zajetc on compute-1
Oct 10 05:46:56 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.102:0/2866042771' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 05:46:56 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 05:46:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct 10 05:46:56 np0005479822 systemd[1]: Starting Ceph rgw.rgw.compute-1.zajetc for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:46:56 np0005479822 podman[84044]: 2025-10-10 09:46:56.450426453 +0000 UTC m=+0.051951986 container create f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:46:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:46:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:46:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:46:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.zajetc supports timestamps until 2038 (0x7fffffff)
Oct 10 05:46:56 np0005479822 podman[84044]: 2025-10-10 09:46:56.424211818 +0000 UTC m=+0.025737421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:46:56 np0005479822 podman[84044]: 2025-10-10 09:46:56.526917685 +0000 UTC m=+0.128443258 container init f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 05:46:56 np0005479822 podman[84044]: 2025-10-10 09:46:56.542591902 +0000 UTC m=+0.144117475 container start f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:46:56 np0005479822 bash[84044]: f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3
Oct 10 05:46:56 np0005479822 systemd[1]: Started Ceph rgw.rgw.compute-1.zajetc for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:46:56 np0005479822 radosgw[84063]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 10 05:46:56 np0005479822 radosgw[84063]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 10 05:46:56 np0005479822 radosgw[84063]: framework: beast
Oct 10 05:46:56 np0005479822 radosgw[84063]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct 10 05:46:56 np0005479822 radosgw[84063]: init_numa not setting numa affinity
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: Deploying daemon rgw.rgw.compute-0.myiozw on compute-0
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct 10 05:46:57 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 45 pg[10.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 10 05:46:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 05:46:58 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 05:46:58 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 05:46:58 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 05:46:58 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 05:46:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct 10 05:46:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 46 pg[10.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 05:46:59 np0005479822 ceph-mon[79167]: Deploying daemon mds.cephfs.compute-2.vlgajy on compute-2
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e3 new map
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e3 print_map#012e3#012btime 2025-10-10T09:47:00:211513+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:46:34.511367+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.vlgajy{-1:24337} state up:standby seq 1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e4 new map
Oct 10 05:47:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e4 print_map#012e4#012btime 2025-10-10T09:47:00:244509+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:00.244232+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.vlgajy{0:24337} state up:creating seq 1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: daemon mds.cephfs.compute-2.vlgajy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: daemon mds.cephfs.compute-2.vlgajy is now active in filesystem cephfs as rank 0
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: Deploying daemon mds.cephfs.compute-0.cchwlo on compute-0
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e5 new map
Oct 10 05:47:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e5 print_map#012e5#012btime 2025-10-10T09:47:01:287113+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:01.287110+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24337 members: 24337#012[mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct 10 05:47:01 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 49 pg[12.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct 10 05:47:02 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e6 new map
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e6 print_map#012e6#012btime 2025-10-10T09:47:02:297566+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:01.287110+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24337 members: 24337#012[mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e7 new map
Oct 10 05:47:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e7 print_map#012e7#012btime 2025-10-10T09:47:02:322797+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:01.287110+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24337 members: 24337#012[mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 05:47:02 np0005479822 podman[84742]: 2025-10-10 09:47:02.905008171 +0000 UTC m=+0.062318969 container create 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 05:47:02 np0005479822 systemd[1]: Started libpod-conmon-09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519.scope.
Oct 10 05:47:02 np0005479822 podman[84742]: 2025-10-10 09:47:02.881900111 +0000 UTC m=+0.039210939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:47:02 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:47:03 np0005479822 podman[84742]: 2025-10-10 09:47:03.016104615 +0000 UTC m=+0.173415473 container init 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 05:47:03 np0005479822 podman[84742]: 2025-10-10 09:47:03.028524863 +0000 UTC m=+0.185835691 container start 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:47:03 np0005479822 podman[84742]: 2025-10-10 09:47:03.032357497 +0000 UTC m=+0.189668325 container attach 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:47:03 np0005479822 busy_liskov[84758]: 167 167
Oct 10 05:47:03 np0005479822 systemd[1]: libpod-09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519.scope: Deactivated successfully.
Oct 10 05:47:03 np0005479822 podman[84742]: 2025-10-10 09:47:03.037512988 +0000 UTC m=+0.194823776 container died 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:47:03 np0005479822 systemd[1]: var-lib-containers-storage-overlay-26b67ae4d41d8b149e8dcfe5d415a35dff9e2ea8b73b90a48bc57d54e61dc0e3-merged.mount: Deactivated successfully.
Oct 10 05:47:03 np0005479822 podman[84742]: 2025-10-10 09:47:03.091550589 +0000 UTC m=+0.248861387 container remove 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 05:47:03 np0005479822 systemd[1]: libpod-conmon-09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519.scope: Deactivated successfully.
Oct 10 05:47:03 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: Deploying daemon mds.cephfs.compute-1.fhagzt on compute-1
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 05:47:03 np0005479822 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 05:47:03 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:03 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:03 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:03 np0005479822 radosgw[84063]: v1 topic migration: starting v1 topic migration..
Oct 10 05:47:03 np0005479822 radosgw[84063]: LDAP not started since no server URIs were provided in the configuration.
Oct 10 05:47:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc[84059]: 2025-10-10T09:47:03.491+0000 7f55a02c7980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 10 05:47:03 np0005479822 radosgw[84063]: v1 topic migration: finished v1 topic migration
Oct 10 05:47:03 np0005479822 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 10 05:47:03 np0005479822 radosgw[84063]: framework: beast
Oct 10 05:47:03 np0005479822 radosgw[84063]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 10 05:47:03 np0005479822 radosgw[84063]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 10 05:47:03 np0005479822 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 10 05:47:03 np0005479822 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 10 05:47:03 np0005479822 radosgw[84063]: starting handler: beast
Oct 10 05:47:03 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:03 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:03 np0005479822 radosgw[84063]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 05:47:03 np0005479822 radosgw[84063]: mgrc service_daemon_register rgw.24185 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.zajetc,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=ac475a20-bf0e-4531-bd8b-a44afde7c93f,zone_name=default,zonegroup_id=8929b431-04ce-48e1-bb4a-cedab812d97d,zonegroup_name=default}
Oct 10 05:47:03 np0005479822 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 10 05:47:03 np0005479822 systemd[1]: Starting Ceph mds.cephfs.compute-1.fhagzt for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:47:04 np0005479822 podman[84936]: 2025-10-10 09:47:04.051561897 +0000 UTC m=+0.043571047 container create 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 05:47:04 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:04 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:04 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:04 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.fhagzt supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:04 np0005479822 podman[84936]: 2025-10-10 09:47:04.125531071 +0000 UTC m=+0.117540231 container init 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:47:04 np0005479822 podman[84936]: 2025-10-10 09:47:04.032170589 +0000 UTC m=+0.024179759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:47:04 np0005479822 podman[84936]: 2025-10-10 09:47:04.131585446 +0000 UTC m=+0.123594596 container start 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 05:47:04 np0005479822 bash[84936]: 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd
Oct 10 05:47:04 np0005479822 systemd[1]: Started Ceph mds.cephfs.compute-1.fhagzt for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: main not setting numa affinity
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: pidfile_write: ignore empty --pid-file
Oct 10 05:47:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt[84952]: starting mds.cephfs.compute-1.fhagzt at 
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Updating MDS map to version 7 from mon.2
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e8 new map
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e8 print_map#012e8#012btime 2025-10-10T09:47:04:615775+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:04.295946+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24337 members: 24337#012[mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Updating MDS map to version 8 from mon.2
Oct 10 05:47:04 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Monitors have assigned me to become a standby
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: Cluster is now healthy
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 05:47:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.287876808 +0000 UTC m=+0.060820666 container create 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 05:47:05 np0005479822 systemd[1]: Started libpod-conmon-405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196.scope.
Oct 10 05:47:05 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.267503993 +0000 UTC m=+0.040447871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.371729281 +0000 UTC m=+0.144673209 container init 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.38053018 +0000 UTC m=+0.153474058 container start 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.38416387 +0000 UTC m=+0.157107748 container attach 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 05:47:05 np0005479822 reverent_lederberg[85080]: 167 167
Oct 10 05:47:05 np0005479822 systemd[1]: libpod-405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196.scope: Deactivated successfully.
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.38601587 +0000 UTC m=+0.158959708 container died 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:47:05 np0005479822 systemd[1]: var-lib-containers-storage-overlay-4c9c13188d60f3db206276a0db2b67e343a32bdf48d020eefa5ed117aad858c2-merged.mount: Deactivated successfully.
Oct 10 05:47:05 np0005479822 podman[85064]: 2025-10-10 09:47:05.438070998 +0000 UTC m=+0.211014876 container remove 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 05:47:05 np0005479822 systemd[1]: libpod-conmon-405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196.scope: Deactivated successfully.
Oct 10 05:47:05 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:05 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:05 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:05 np0005479822 ceph-mon[79167]: Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx
Oct 10 05:47:05 np0005479822 ceph-mon[79167]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 10 05:47:05 np0005479822 ceph-mon[79167]: Rados config object exists: conf-nfs.cephfs
Oct 10 05:47:05 np0005479822 ceph-mon[79167]: Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx-rgw
Oct 10 05:47:05 np0005479822 ceph-mon[79167]: Bind address in nfs.cephfs.0.0.compute-1.mssvzx's ganesha conf is defaulting to empty
Oct 10 05:47:05 np0005479822 ceph-mon[79167]: Deploying daemon nfs.cephfs.0.0.compute-1.mssvzx on compute-1
Oct 10 05:47:05 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:05 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:05 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:06 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:47:06 np0005479822 podman[85226]: 2025-10-10 09:47:06.513858098 +0000 UTC m=+0.059953333 container create 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:47:06 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:06 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:06 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:06 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:06 np0005479822 podman[85226]: 2025-10-10 09:47:06.493254007 +0000 UTC m=+0.039349252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:47:06 np0005479822 podman[85226]: 2025-10-10 09:47:06.612138984 +0000 UTC m=+0.158234249 container init 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Oct 10 05:47:06 np0005479822 podman[85226]: 2025-10-10 09:47:06.62189399 +0000 UTC m=+0.167989235 container start 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 05:47:06 np0005479822 bash[85226]: 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 05:47:06 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 05:47:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e9 new map
Oct 10 05:47:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e9 print_map#012e9#012btime 2025-10-10T09:47:06:672904+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:04.295946+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24337 members: 24337#012[mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:47:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: Creating key for client.nfs.cephfs.1.0.compute-2.boccfy
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 05:47:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 05:47:08 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e10 new map
Oct 10 05:47:08 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).mds e10 print_map#012e10#012btime 2025-10-10T09:47:08:789045+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-10T09:46:34.511367+0000#012modified#0112025-10-10T09:47:04.295946+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24337}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24337 members: 24337#012[mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 05:47:08 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Updating MDS map to version 10 from mon.2
Oct 10 05:47:08 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 05:47:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: Rados config object exists: conf-nfs.cephfs
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: Creating key for client.nfs.cephfs.1.0.compute-2.boccfy-rgw
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: Bind address in nfs.cephfs.1.0.compute-2.boccfy's ganesha conf is defaulting to empty
Oct 10 05:47:10 np0005479822 ceph-mon[79167]: Deploying daemon nfs.cephfs.1.0.compute-2.boccfy on compute-2
Oct 10 05:47:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:47:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:47:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:47:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: Rados config object exists: conf-nfs.cephfs
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo-rgw
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: Bind address in nfs.cephfs.2.0.compute-0.ruydzo's ganesha conf is defaulting to empty
Oct 10 05:47:12 np0005479822 ceph-mon[79167]: Deploying daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0
Oct 10 05:47:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:47:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:47:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:47:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:47:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:14 np0005479822 ceph-mon[79167]: Deploying daemon haproxy.nfs.cephfs.compute-1.ehhoyw on compute-1
Oct 10 05:47:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:17 np0005479822 podman[85385]: 2025-10-10 09:47:17.443696862 +0000 UTC m=+2.828421610 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 05:47:17 np0005479822 podman[85385]: 2025-10-10 09:47:17.465420113 +0000 UTC m=+2.850144791 container create 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 05:47:17 np0005479822 systemd[1]: Started libpod-conmon-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope.
Oct 10 05:47:17 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:47:17 np0005479822 podman[85385]: 2025-10-10 09:47:17.551317202 +0000 UTC m=+2.936041960 container init 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 05:47:17 np0005479822 podman[85385]: 2025-10-10 09:47:17.563390331 +0000 UTC m=+2.948115029 container start 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 05:47:17 np0005479822 podman[85385]: 2025-10-10 09:47:17.567411091 +0000 UTC m=+2.952135789 container attach 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 05:47:17 np0005479822 trusting_antonelli[85502]: 0 0
Oct 10 05:47:17 np0005479822 systemd[1]: libpod-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope: Deactivated successfully.
Oct 10 05:47:17 np0005479822 conmon[85502]: conmon 2802889e2879f73e6998 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope/container/memory.events
Oct 10 05:47:17 np0005479822 podman[85507]: 2025-10-10 09:47:17.610549265 +0000 UTC m=+0.028285031 container died 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 05:47:17 np0005479822 systemd[1]: var-lib-containers-storage-overlay-539fc5640b4738cd9075a771a9d807e4902716255b4160d5c8e9dc269ef3517a-merged.mount: Deactivated successfully.
Oct 10 05:47:17 np0005479822 podman[85507]: 2025-10-10 09:47:17.656906647 +0000 UTC m=+0.074642413 container remove 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 05:47:17 np0005479822 systemd[1]: libpod-conmon-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope: Deactivated successfully.
Oct 10 05:47:17 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:17 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:17 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:18 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:18 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:18 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:18 np0005479822 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.ehhoyw for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:47:18 np0005479822 podman[85655]: 2025-10-10 09:47:18.687458806 +0000 UTC m=+0.060999162 container create 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:47:18 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:18 np0005479822 podman[85655]: 2025-10-10 09:47:18.655881506 +0000 UTC m=+0.029421902 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 05:47:18 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9184a1463110283b415ebe1aaffb56f883db14b6210305024f4070f5289d465f/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:18 np0005479822 podman[85655]: 2025-10-10 09:47:18.796484544 +0000 UTC m=+0.170024950 container init 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:47:18 np0005479822 podman[85655]: 2025-10-10 09:47:18.807104093 +0000 UTC m=+0.180644439 container start 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:47:18 np0005479822 bash[85655]: 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562
Oct 10 05:47:18 np0005479822 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.ehhoyw for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:47:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [NOTICE] 282/094718 (2) : New worker #1 (4) forked
Oct 10 05:47:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a8000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:19 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:19 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:19 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:19 np0005479822 ceph-mon[79167]: Deploying daemon haproxy.nfs.cephfs.compute-0.gptveb on compute-0
Oct 10 05:47:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:24 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:24 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:24 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:24 np0005479822 ceph-mon[79167]: Deploying daemon haproxy.nfs.cephfs.compute-2.eokdol on compute-2
Oct 10 05:47:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.418467) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648418681, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6471, "num_deletes": 255, "total_data_size": 17860428, "memory_usage": 19297984, "flush_reason": "Manual Compaction"}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648485420, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11386860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 6476, "table_properties": {"data_size": 11362441, "index_size": 15281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 78408, "raw_average_key_size": 24, "raw_value_size": 11300778, "raw_average_value_size": 3507, "num_data_blocks": 678, "num_entries": 3222, "num_filter_entries": 3222, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 1760089522, "file_creation_time": 1760089648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 67016 microseconds, and 38051 cpu microseconds.
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.485501) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11386860 bytes OK
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.485527) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.486933) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.486956) EVENT_LOG_v1 {"time_micros": 1760089648486949, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.486976) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 17825352, prev total WAL file size 17825352, number of live WAL files 2.
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.494106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(10MB) 8(1648B)]
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648494250, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11388508, "oldest_snapshot_seqno": -1}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: Deploying daemon keepalived.nfs.cephfs.compute-2.fcbgvm on compute-2
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2971 keys, 11383426 bytes, temperature: kUnknown
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648556587, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11383426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11359573, "index_size": 15296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7493, "raw_key_size": 74983, "raw_average_key_size": 25, "raw_value_size": 11301058, "raw_average_value_size": 3803, "num_data_blocks": 677, "num_entries": 2971, "num_filter_entries": 2971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.556927) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11383426 bytes
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.558769) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.3 rd, 182.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(10.9, 0.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3227, records dropped: 256 output_compression: NoCompression
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.558803) EVENT_LOG_v1 {"time_micros": 1760089648558787, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62466, "compaction_time_cpu_micros": 29125, "output_level": 6, "num_output_files": 1, "total_output_size": 11383426, "num_input_records": 3227, "num_output_records": 2971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648562595, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648562680, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 10 05:47:28 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.493956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:32 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:32 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:32 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct 10 05:47:33 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 05:47:33 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 05:47:33 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 05:47:33 np0005479822 ceph-mon[79167]: Deploying daemon keepalived.nfs.cephfs.compute-1.twbftp on compute-1
Oct 10 05:47:33 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:47:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct 10 05:47:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:47:34 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:47:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:35 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct 10 05:47:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:47:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:47:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:35 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:35 np0005479822 podman[85779]: 2025-10-10 09:47:35.936597344 +0000 UTC m=+2.680742729 container create a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, distribution-scope=public, version=2.2.4)
Oct 10 05:47:35 np0005479822 systemd[1]: Started libpod-conmon-a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7.scope.
Oct 10 05:47:35 np0005479822 podman[85779]: 2025-10-10 09:47:35.914944584 +0000 UTC m=+2.659090009 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 05:47:36 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:47:36 np0005479822 podman[85779]: 2025-10-10 09:47:36.033485771 +0000 UTC m=+2.777631166 container init a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 10 05:47:36 np0005479822 podman[85779]: 2025-10-10 09:47:36.04479979 +0000 UTC m=+2.788945205 container start a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, release=1793, version=2.2.4, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph)
Oct 10 05:47:36 np0005479822 podman[85779]: 2025-10-10 09:47:36.04921363 +0000 UTC m=+2.793359025 container attach a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 10 05:47:36 np0005479822 inspiring_jepsen[85874]: 0 0
Oct 10 05:47:36 np0005479822 systemd[1]: libpod-a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7.scope: Deactivated successfully.
Oct 10 05:47:36 np0005479822 podman[85779]: 2025-10-10 09:47:36.054659678 +0000 UTC m=+2.798805093 container died a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, version=2.2.4, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 10 05:47:36 np0005479822 systemd[1]: var-lib-containers-storage-overlay-c3d0f156852c5eac93778730723a2ee92bee8375bd914e5d68babdf4a608dfe3-merged.mount: Deactivated successfully.
Oct 10 05:47:36 np0005479822 podman[85779]: 2025-10-10 09:47:36.090187186 +0000 UTC m=+2.834332551 container remove a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 10 05:47:36 np0005479822 systemd[1]: libpod-conmon-a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7.scope: Deactivated successfully.
Oct 10 05:47:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:47:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:47:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:47:36 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=11.137809753s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active pruub 172.869598389s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=11.137809753s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown pruub 172.869598389s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:47:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:47:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:47:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:47:36 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:47:36 np0005479822 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.twbftp for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:47:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:37 np0005479822 podman[86019]: 2025-10-10 09:47:37.1400784 +0000 UTC m=+0.070055648 container create 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:37 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/813f4f3e98515d1f49d118a58d4d31316c669b35b8f3d9d42503c7dcdcd53760/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:47:37 np0005479822 podman[86019]: 2025-10-10 09:47:37.202141861 +0000 UTC m=+0.132119149 container init 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, distribution-scope=public, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Oct 10 05:47:37 np0005479822 podman[86019]: 2025-10-10 09:47:37.208533175 +0000 UTC m=+0.138510433 container start 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, release=1793)
Oct 10 05:47:37 np0005479822 podman[86019]: 2025-10-10 09:47:37.113356983 +0000 UTC m=+0.043334261 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 05:47:37 np0005479822 bash[86019]: 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3880030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:37 np0005479822 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.twbftp for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Running on Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 (built for Linux 5.14.0)
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Starting VRRP child process, pid=4
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Startup complete
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: (VI_0) Entering BACKUP STATE (init)
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: VRRP_Script(check_backend) succeeded
Oct 10 05:47:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[10.0( v 51'1091 (0'0,51'1091] local-lis/les=45/46 n=178 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.400810242s) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 51'1090 mlcod 51'1090 active pruub 171.151397705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.12( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:47:37 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.16( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.10( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.14( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.17( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.0( empty local-lis/les=54/56 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.7( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.19( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[10.0( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.400810242s) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 51'1090 mlcod 0'0 unknown pruub 171.151397705s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dab248 space 0x55b096d265c0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddf388 space 0x55b096d27390 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddf068 space 0x55b096d26f80 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9e348 space 0x55b096d272c0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9ff68 space 0x55b096de3120 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dc3c48 space 0x55b096d260e0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d4d248 space 0x55b096d361b0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d827a8 space 0x55b096d0be20 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9f248 space 0x55b096d26690 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d837e8 space 0x55b096d276d0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dc3ba8 space 0x55b096d26aa0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9f928 space 0x55b096d36010 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d82ca8 space 0x55b096d27460 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dc3108 space 0x55b096c9b460 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9eca8 space 0x55b096de2760 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dd7f68 space 0x55b096d26b70 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d4cc08 space 0x55b096d27a10 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d831a8 space 0x55b096d27600 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096daae88 space 0x55b0966609d0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddf6a8 space 0x55b096d27050 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddfba8 space 0x55b096d27120 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9fc48 space 0x55b096de2900 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dabd88 space 0x55b096c9b530 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096db0168 space 0x55b096c781b0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d4c668 space 0x55b096d27870 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d83e28 space 0x55b096d277a0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddeb68 space 0x55b096d26eb0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9e848 space 0x55b096d26760 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096daade8 space 0x55b095aef7a0 0x0~1000 clean)
Oct 10 05:47:37 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9e3e8 space 0x55b096d360e0 0x0~1000 clean)
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1b( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.18( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.12( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1f( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.7( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.10( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.11( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1e( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1d( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1c( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1a( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.19( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.6( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.5( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.4( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.3( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.8( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.d( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.b( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.9( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.c( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.e( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.f( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.2( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.13( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.14( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.15( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.a( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.16( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.17( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: Deploying daemon keepalived.nfs.cephfs.compute-0.mciijj on compute-0
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 05:47:38 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.5( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.4( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.0( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 51'1090 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:38 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 10 05:47:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:38 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3880030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:39 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 10 05:47:39 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 10 05:47:39 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct 10 05:47:39 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 58 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.429636002s) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 175.213150024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:39 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 58 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.429636002s) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown pruub 175.213150024s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:39 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:39 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 10 05:47:40 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.11( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.10( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.13( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.12( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.15( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.4( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.7( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.6( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.9( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.8( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.a( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.f( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.c( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.b( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.e( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.d( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.5( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.2( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.3( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1f( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1c( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1a( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1b( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.18( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.19( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.16( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.14( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1e( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1d( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.17( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.15( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.9( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.f( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.5( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.0( empty local-lis/les=58/59 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1f( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.14( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.16( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:40 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:47:40 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 05:47:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3880030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:40 2025: (VI_0) Entering MASTER STATE
Oct 10 05:47:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:40 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Oct 10 05:47:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:40 2025: (VI_0) Entering BACKUP STATE
Oct 10 05:47:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 10 05:47:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 10 05:47:41 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct 10 05:47:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:42 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 10 05:47:42 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 10 05:47:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:42 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:43 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:43 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:43 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:43 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:43 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 10 05:47:43 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 10 05:47:44 np0005479822 ceph-mon[79167]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 10 05:47:44 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:44 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Oct 10 05:47:44 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Oct 10 05:47:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:44 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:45 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 10 05:47:45 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.713502884s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.796554565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.690283775s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.773452759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.713395119s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.796554565s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.690241814s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.773452759s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715754509s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799255371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715714455s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799255371s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.682233810s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765914917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.682155609s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765914917s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715379715s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799240112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715338707s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799240112s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.681708336s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765869141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.681674957s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765869141s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.714180946s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799240112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.714117050s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799240112s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687976837s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.773345947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687910080s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.773345947s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678252220s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765853882s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678228378s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765853882s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.685546875s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.773361206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.685509682s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.773361206s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678056717s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765991211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678024292s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765991211s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.679808617s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.768020630s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.679785728s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.768020630s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.711056709s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799453735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.711032867s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799453735s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710590363s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799346924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710541725s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710323334s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799301147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.9( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710341454s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 active pruub 181.799392700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710278511s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799301147s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.676536560s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765686035s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.9( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710184097s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.799392700s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.676502228s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765686035s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.675749779s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765289307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.678380966s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767974854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.675717354s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765289307s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709904671s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799392700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.678343773s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767974854s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709317207s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799453735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709264755s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799392700s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709279060s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799453735s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709352493s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799697876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709320068s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799697876s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709117889s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799682617s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709087372s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799682617s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676994324s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767807007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676941872s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767807007s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.708835602s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799758911s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.708807945s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799758911s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676640511s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767822266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.5( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673544884s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764770508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676595688s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767822266s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.5( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673506737s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764770508s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676552773s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767883301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.674350739s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765686035s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676523209s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767883301s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.674324036s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765686035s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673120499s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764724731s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.675526619s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767166138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673097610s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764724731s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.675501823s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767166138s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707875252s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799819946s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707849503s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799819946s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672030449s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764068604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671961784s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764053345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672005653s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764068604s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.674842834s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=57'1092 lcod 59'1093 mlcod 59'1093 active pruub 179.767105103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671931267s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764053345s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.674800873s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=57'1092 lcod 59'1093 mlcod 0'0 unknown NOTIFY pruub 179.767105103s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707437515s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799835205s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707401276s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799835205s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671418190s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.763946533s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671397209s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.763946533s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.14( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671131134s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.763946533s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.14( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671073914s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.763946533s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672192574s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765106201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672147751s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765106201s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673967361s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767059326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706785202s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799942017s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673920631s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767059326s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706767082s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799942017s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706682205s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799987793s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706666946s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799987793s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.11( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669240952s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.762603760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.11( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669212341s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.762603760s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669639587s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.763214111s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669616699s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.763214111s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673401833s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767013550s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673379898s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767013550s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.668558121s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.762359619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.668535233s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.762359619s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.18( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706106186s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 active pruub 181.800003052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.18( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706048965s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.800003052s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706009865s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800003052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.705970764s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800003052s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672766685s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.766891479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672748566s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.766891479s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660766602s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.755142212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660737991s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.755142212s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660465240s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.755126953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660358429s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.755111694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672463417s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767181396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660330772s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.755111694s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672400475s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767181396s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.705061913s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800109863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.705037117s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800109863s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704943657s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800125122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704910278s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800125122s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.667339325s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.762603760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.667314529s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.762603760s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704752922s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800155640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704732895s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800155640s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.671610832s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767150879s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.671586990s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767150879s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660412788s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.755126953s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.16( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.667587280s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.762969971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.16( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.666921616s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.762969971s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 10 05:47:46 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 10 05:47:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:46 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=57'1092 lcod 59'1093 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=57'1092 lcod 59'1093 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.15( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.10( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.d( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.14( v 60'57 lc 60'56 (0'0,60'57] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=60'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.f( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.a( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.6( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=61/62 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.e( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=61/62 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.1b( v 51'44 lc 51'8 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.18( v 51'44 lc 51'18 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.10( v 57'47 lc 51'14 (0'0,57'47] local-lis/les=61/62 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=57'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.11( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.12( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: Regenerating cephadm self-signed grafana TLS certificates
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: Deploying daemon grafana.compute-0 on compute-0
Oct 10 05:47:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003e00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.684146881s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.773544312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.684079170s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.773544312s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.678178787s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.768295288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.678135872s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.768295288s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.677606583s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.768310547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.677490234s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.768310547s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.682567596s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.773513794s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.682549477s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.773513794s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.674398422s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767364502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.674337387s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767364502s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673968315s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767318726s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673941612s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767318726s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673541069s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767242432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673457146s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767242432s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.672980309s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767227173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.672939301s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767227173s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=59'1094 lcod 59'1093 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:48 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 10 05:47:48 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 10 05:47:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.702158928s) [2] async=[2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.096115112s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.702087402s) [2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.096115112s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.708526611s) [2] async=[2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103134155s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:48 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.708438873s) [2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103134155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:48 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003e00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:49 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466288567s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103683472s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466186523s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103683472s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470971107s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109085083s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470538139s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.108795166s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470425606s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.108795166s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470301628s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109085083s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.469999313s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109008789s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.469942093s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109008789s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464769363s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103912354s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463806152s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103195190s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464865685s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.104278564s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464645386s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103912354s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463734627s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103195190s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464756966s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.104278564s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.469159126s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.108825684s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.468850136s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.108825684s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463052750s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103561401s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463002205s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103561401s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.5( v 64'1098 (0'0,64'1098] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463479042s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=59'1094 lcod 64'1097 mlcod 64'1097 active pruub 189.104110718s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463605881s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.104019165s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.5( v 64'1098 (0'0,64'1098] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463342667s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=59'1094 lcod 64'1097 mlcod 0'0 unknown NOTIFY pruub 189.104110718s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463195801s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.104019165s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.462391853s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103607178s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.462322235s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103607178s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466937065s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109222412s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466878891s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109222412s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466481209s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109191895s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466403961s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109191895s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:49 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:49 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:50 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.285489082s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961334229s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.285408020s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961334229s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.281500816s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.957702637s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.281429291s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.957702637s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284450531s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.960998535s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284386635s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.960998535s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284519196s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961502075s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284316063s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961502075s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283745766s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961090088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283687592s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961090088s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283139229s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.960800171s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283920288s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961593628s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283873558s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961593628s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283022881s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.960800171s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283117294s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961242676s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:50 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283035278s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961242676s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:50 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:51 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct 10 05:47:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:52 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:53 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 10 05:47:53 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 10 05:47:54 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Oct 10 05:47:54 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Oct 10 05:47:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.933684) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674933710, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1077, "num_deletes": 251, "total_data_size": 1849896, "memory_usage": 1870896, "flush_reason": "Manual Compaction"}
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674941783, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1187808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6481, "largest_seqno": 7553, "table_properties": {"data_size": 1182537, "index_size": 2603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13502, "raw_average_key_size": 21, "raw_value_size": 1171020, "raw_average_value_size": 1847, "num_data_blocks": 115, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089648, "oldest_key_time": 1760089648, "file_creation_time": 1760089674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 8319 microseconds, and 5075 cpu microseconds.
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.942000) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1187808 bytes OK
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.942018) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.943433) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.943458) EVENT_LOG_v1 {"time_micros": 1760089674943450, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.943477) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1844134, prev total WAL file size 1844134, number of live WAL files 2.
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.944602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1159KB)], [15(10MB)]
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674944638, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12571234, "oldest_snapshot_seqno": -1}
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3078 keys, 11335147 bytes, temperature: kUnknown
Oct 10 05:47:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674998860, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11335147, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11310656, "index_size": 15678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 79237, "raw_average_key_size": 25, "raw_value_size": 11249999, "raw_average_value_size": 3654, "num_data_blocks": 685, "num_entries": 3078, "num_filter_entries": 3078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.999086) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11335147 bytes
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.000310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.5 rd, 208.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(20.1) write-amplify(9.5) OK, records in: 3605, records dropped: 527 output_compression: NoCompression
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.000341) EVENT_LOG_v1 {"time_micros": 1760089675000333, "job": 6, "event": "compaction_finished", "compaction_time_micros": 54297, "compaction_time_cpu_micros": 28389, "output_level": 6, "num_output_files": 1, "total_output_size": 11335147, "num_input_records": 3605, "num_output_records": 3078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675000614, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675002482, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.944480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:55 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:47:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:55 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 10 05:47:55 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 10 05:47:56 np0005479822 ceph-mon[79167]: Deploying daemon haproxy.rgw.default.compute-0.ofnenu on compute-0
Oct 10 05:47:56 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 10 05:47:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct 10 05:47:56 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 10 05:47:56 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 10 05:47:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:56 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:57 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 10 05:47:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:47:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:57 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 10 05:47:57 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 10 05:47:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:47:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000067s ======
Oct 10 05:47:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:47:57.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000067s
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: Deploying daemon haproxy.rgw.default.compute-2.mhdkdo on compute-2
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484390259s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 195.768478394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484338760s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.768478394s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484121323s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 195.768478394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484080315s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.768478394s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482593536s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=60'1098 lcod 60'1097 mlcod 60'1097 active pruub 195.767684937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482537270s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 unknown NOTIFY pruub 195.767684937s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482146263s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 195.767547607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482050896s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.767547607s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:47:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 10 05:47:58 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 10 05:47:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:58 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: Deploying daemon keepalived.rgw.default.compute-2.bbeizy on compute-2
Oct 10 05:47:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct 10 05:47:59 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:59 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:59 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=70/71 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:59 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:47:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:47:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:47:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:47:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:47:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:47:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:47:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:47:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:47:59.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct 10 05:48:00 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.4( v 71'1102 (0'0,71'1102] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967370987s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=60'1098 lcod 71'1101 mlcod 71'1101 active pruub 200.424423218s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967158318s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 200.424346924s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.4( v 71'1102 (0'0,71'1102] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967208862s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=60'1098 lcod 71'1101 mlcod 0'0 unknown NOTIFY pruub 200.424423218s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967059135s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.424346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.966670990s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 200.424346924s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.961228371s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 200.419113159s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.966114044s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.424346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.960394859s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.419113159s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct 10 05:48:00 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct 10 05:48:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:00 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 05:48:01 np0005479822 ceph-mon[79167]: Deploying daemon keepalived.rgw.default.compute-0.igkrok on compute-0
Oct 10 05:48:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:01 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 10 05:48:01 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 10 05:48:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:01.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:02 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 10 05:48:02 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 10 05:48:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct 10 05:48:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:02 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:03 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct 10 05:48:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:03 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 10 05:48:03 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 10 05:48:03 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:03 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:03 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:03 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:03.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct 10 05:48:04 np0005479822 ceph-mon[79167]: Deploying daemon prometheus.compute-0 on compute-0
Oct 10 05:48:04 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:04 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:05 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.f deep-scrub starts
Oct 10 05:48:05 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.f deep-scrub ok
Oct 10 05:48:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct 10 05:48:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:05 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 10 05:48:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:05.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Oct 10 05:48:06 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 10 05:48:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:06 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:07 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.d scrub starts
Oct 10 05:48:07 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.d scrub ok
Oct 10 05:48:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:07 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 10 05:48:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct 10 05:48:07 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:07 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:07 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=64/64 les/c/f=65/65/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:07 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:07 np0005479822 systemd-logind[789]: New session 37 of user zuul.
Oct 10 05:48:07 np0005479822 systemd[1]: Started Session 37 of User zuul.
Oct 10 05:48:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:08 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Oct 10 05:48:08 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Oct 10 05:48:08 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 10 05:48:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:08 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:08 np0005479822 python3.9[86215]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:48:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct 10 05:48:09 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Oct 10 05:48:09 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Oct 10 05:48:09 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:09 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:09 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:09 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:09 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:09 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:09 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:09 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 10 05:48:09 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  1: '-n'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  2: 'mgr.compute-1.rfugxc'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  3: '-f'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  4: '--setuser'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  5: 'ceph'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  6: '--setgroup'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  7: 'ceph'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr respawn  exe_path /proc/self/exe
Oct 10 05:48:09 np0005479822 systemd[1]: session-35.scope: Deactivated successfully.
Oct 10 05:48:09 np0005479822 systemd[1]: session-35.scope: Consumed 23.239s CPU time.
Oct 10 05:48:09 np0005479822 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Oct 10 05:48:09 np0005479822 systemd-logind[789]: Removed session 35.
Oct 10 05:48:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 05:48:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 05:48:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:09.981+0000 7f73f2c98140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 05:48:09 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 05:48:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:10.062+0000 7f73f2c98140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:48:10 np0005479822 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 05:48:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 05:48:10 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=3 ec=56/45 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=3 ec=56/45 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Oct 10 05:48:10 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Oct 10 05:48:10 np0005479822 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 10 05:48:10 np0005479822 python3.9[86461]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:48:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 05:48:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:10.860+0000 7f73f2c98140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:48:10 np0005479822 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 05:48:10 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 05:48:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:10 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 05:48:11 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.491+0000 7f73f2c98140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 05:48:11 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=3 ec=56/45 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:11 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:11 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:11 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:11 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 10 05:48:11 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 10 05:48:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:11.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:  from numpy import show_config as show_numpy_config
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.663+0000 7f73f2c98140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.734+0000 7f73f2c98140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 05:48:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.878+0000 7f73f2c98140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 05:48:11 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 05:48:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:11.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 05:48:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 05:48:12 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Oct 10 05:48:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 05:48:12 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Oct 10 05:48:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 05:48:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:12.844+0000 7f73f2c98140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:48:12 np0005479822 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 05:48:12 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 05:48:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:12 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.056+0000 7f73f2c98140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.129+0000 7f73f2c98140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.196+0000 7f73f2c98140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.277+0000 7f73f2c98140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.354+0000 7f73f2c98140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:13 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.16 deep-scrub starts
Oct 10 05:48:13 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.16 deep-scrub ok
Oct 10 05:48:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.691+0000 7f73f2c98140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.787+0000 7f73f2c98140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 05:48:13 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 05:48:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:13.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 05:48:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.227+0000 7f73f2c98140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 05:48:14 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Oct 10 05:48:14 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Oct 10 05:48:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.776+0000 7f73f2c98140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 05:48:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.842+0000 7f73f2c98140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 05:48:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:14 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.915+0000 7f73f2c98140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 05:48:14 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.055+0000 7f73f2c98140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.129+0000 7f73f2c98140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.275+0000 7f73f2c98140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.479+0000 7f73f2c98140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:15 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Oct 10 05:48:15 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.741+0000 7f73f2c98140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.807+0000 7f73f2c98140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr load Constructed class from module: dashboard
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: mgr load Constructed class from module: prometheus
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Starting engine...
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus INFO root] Starting engine...
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: [10/Oct/2025:09:48:15] ENGINE Bus STARTING
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:15] ENGINE Bus STARTING
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: CherryPy Checker:
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: The Application mounted at '' has an empty config.
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x5626eff61860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [dashboard INFO root] Engine started...
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: [10/Oct/2025:09:48:15] ENGINE Serving on http://:::9283
Oct 10 05:48:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: [10/Oct/2025:09:48:15] ENGINE Bus STARTED
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:15] ENGINE Serving on http://:::9283
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:15] ENGINE Bus STARTED
Oct 10 05:48:15 np0005479822 ceph-mgr[79476]: [prometheus INFO root] Engine started.
Oct 10 05:48:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct 10 05:48:16 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 10 05:48:16 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 10 05:48:16 np0005479822 systemd-logind[789]: New session 38 of user ceph-admin.
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: Manager daemon compute-0.xkdepb is now available
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 05:48:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 05:48:16 np0005479822 systemd[1]: Started Session 38 of User ceph-admin.
Oct 10 05:48:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:16 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:17 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 10 05:48:17 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 10 05:48:17 np0005479822 podman[86646]: 2025-10-10 09:48:17.731810635 +0000 UTC m=+0.122208023 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 10 05:48:17 np0005479822 podman[86646]: 2025-10-10 09:48:17.839894995 +0000 UTC m=+0.230292323 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:48:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:17.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:18 np0005479822 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Oct 10 05:48:18 np0005479822 systemd[1]: session-37.scope: Deactivated successfully.
Oct 10 05:48:18 np0005479822 systemd[1]: session-37.scope: Consumed 8.775s CPU time.
Oct 10 05:48:18 np0005479822 systemd-logind[789]: Removed session 37.
Oct 10 05:48:18 np0005479822 podman[86786]: 2025-10-10 09:48:18.43302946 +0000 UTC m=+0.072866886 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:48:18 np0005479822 podman[86786]: 2025-10-10 09:48:18.447730036 +0000 UTC m=+0.087567452 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:48:18 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 10 05:48:18 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 10 05:48:18 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct 10 05:48:18 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.015638351s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 211.768310547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:18 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.015065193s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.768310547s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:18 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.009293556s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 211.763580322s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:18 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.009239197s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.763580322s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:18 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 10 05:48:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:18 np0005479822 podman[86878]: 2025-10-10 09:48:18.920255755 +0000 UTC m=+0.089401754 container exec 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 05:48:18 np0005479822 podman[86878]: 2025-10-10 09:48:18.941749125 +0000 UTC m=+0.110895124 container exec_died 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 05:48:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:19 np0005479822 podman[86944]: 2025-10-10 09:48:19.266524567 +0000 UTC m=+0.077460210 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:48:19 np0005479822 podman[86944]: 2025-10-10 09:48:19.281046929 +0000 UTC m=+0.091982532 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:48:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 05:48:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:19.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 05:48:19 np0005479822 podman[87007]: 2025-10-10 09:48:19.616358092 +0000 UTC m=+0.085841873 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 10 05:48:19 np0005479822 podman[87007]: 2025-10-10 09:48:19.635908089 +0000 UTC m=+0.105391830 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=keepalived for Ceph, release=1793)
Oct 10 05:48:19 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 10 05:48:19 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct 10 05:48:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:48:17] ENGINE Bus STARTING
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:48:17] ENGINE Serving on http://192.168.122.100:8765
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:48:18] ENGINE Serving on https://192.168.122.100:7150
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:48:18] ENGINE Bus STARTED
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: [10/Oct/2025:09:48:18] ENGINE Client ('192.168.122.100', 53560) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 05:48:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:19.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:20 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct 10 05:48:20 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct 10 05:48:20 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1] r=0 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:20 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1] r=0 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:20 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] async=[0] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:20 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] async=[0] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=15.009707451s) [0] async=[0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 221.819091797s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=15.009644508s) [0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.819091797s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=14.999447823s) [0] async=[0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 221.809829712s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:21 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=14.999012947s) [0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.809829712s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 05:48:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:48:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:21.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:22 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 10 05:48:22 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct 10 05:48:22 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89) [1] r=0 lpr=89 pi=[66,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:22 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89) [1] r=0 lpr=89 pi=[66,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:48:22 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 10 05:48:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:23 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002c70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:23.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 10 05:48:23 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 10 05:48:23 np0005479822 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:48:23 np0005479822 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 05:48:23 np0005479822 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:48:23 np0005479822 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:48:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:24 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct 10 05:48:24 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 91 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=5 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:24 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 91 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=6 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:24 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 10 05:48:24 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 10 05:48:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:24 np0005479822 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 05:48:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:25 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct 10 05:48:25 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:25 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:25 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:25 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:25.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:25 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 10 05:48:25 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 10 05:48:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 05:48:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 05:48:25 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:25 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:25 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:25 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:25 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:48:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct 10 05:48:26 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 93 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=92/93 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:26 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 93 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=92/93 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:26 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct 10 05:48:26 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct 10 05:48:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002c70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:27 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 10 05:48:27 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 10 05:48:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 05:48:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:27.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 05:48:28 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 10 05:48:28 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 10 05:48:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:29 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct 10 05:48:29 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 10 05:48:29 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 94 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:29 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 94 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:29.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:29 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct 10 05:48:29 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct 10 05:48:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 05:48:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:29.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 05:48:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 10 05:48:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:30 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct 10 05:48:30 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:30 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:30 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:30 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:30 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 10 05:48:30 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 10 05:48:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:31 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:31 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 10 05:48:31 np0005479822 ceph-mon[79167]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 10 05:48:31 np0005479822 ceph-mon[79167]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 10 05:48:31 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct 10 05:48:31 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=96) [1] r=0 lpr=96 pi=[72,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:31 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=96) [1] r=0 lpr=96 pi=[72,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:31 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 10 05:48:31 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 10 05:48:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:31.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:32 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 10 05:48:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=2 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=2 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 10 05:48:32 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 10 05:48:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98) [1] r=0 lpr=98 pi=[75,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98) [1] r=0 lpr=98 pi=[75,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=6 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 10 05:48:33 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 10 05:48:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:34 np0005479822 systemd-logind[789]: New session 39 of user zuul.
Oct 10 05:48:34 np0005479822 systemd[1]: Started Session 39 of User zuul.
Oct 10 05:48:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 10 05:48:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct 10 05:48:34 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 100 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=6 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:34 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 100 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=5 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:34 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 10 05:48:34 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 10 05:48:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:34 np0005479822 python3.9[88294]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 05:48:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 05:48:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:35 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct 10 05:48:35 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:35 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:35 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:35 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:35.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:35 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 10 05:48:35 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 10 05:48:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:36 np0005479822 python3.9[88469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:48:36 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct 10 05:48:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 102 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=101/102 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:36 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 102 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=101/102 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:36 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 10 05:48:36 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 10 05:48:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:37 np0005479822 python3.9[88626]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:48:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:37 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 10 05:48:37 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 10 05:48:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:38 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:48:38 np0005479822 python3.9[88779]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:48:38 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 10 05:48:38 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 10 05:48:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:38 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:39 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct 10 05:48:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 10 05:48:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:39 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 10 05:48:39 np0005479822 python3.9[88934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:48:39 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 10 05:48:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:39.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:40 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct 10 05:48:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.961517334s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 active pruub 236.460647583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.961463928s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 236.460647583s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.959306717s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 active pruub 236.460693359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:40 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.959281921s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 236.460693359s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 10 05:48:40 np0005479822 python3.9[89084]: ansible-ansible.builtin.service_facts Invoked
Oct 10 05:48:40 np0005479822 network[89101]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:48:40 np0005479822 network[89102]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:48:40 np0005479822 network[89103]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:48:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 10 05:48:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:40 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 10 05:48:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:41 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct 10 05:48:41 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:41 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:41 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:41 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 10 05:48:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 10 05:48:41 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 10 05:48:41 np0005479822 systemd[82226]: Starting Mark boot as successful...
Oct 10 05:48:41 np0005479822 systemd[82226]: Finished Mark boot as successful.
Oct 10 05:48:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 05:48:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 05:48:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct 10 05:48:42 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=106 pruub=8.241539955s) [2] r=-1 lpr=106 pi=[56,106)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 235.768768311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:42 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=106 pruub=8.240906715s) [2] r=-1 lpr=106 pi=[56,106)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.768768311s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:42 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] async=[2] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:42 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] async=[2] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:42 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 10 05:48:42 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 10 05:48:42 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 10 05:48:42 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 10 05:48:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:42 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:43 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[1] r=0 lpr=107 pi=[56,107)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[1] r=0 lpr=107 pi=[56,107)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=5 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.092753410s) [2] async=[2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 51'1091 active pruub 243.531875610s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=5 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.092676163s) [2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 243.531875610s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=6 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.098297119s) [2] async=[2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 51'1091 active pruub 243.537689209s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=6 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.098007202s) [2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 243.537689209s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct 10 05:48:43 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct 10 05:48:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 05:48:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:43.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 05:48:44 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct 10 05:48:44 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 108 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=107/108 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[56,107)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:48:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:48:44 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 10 05:48:44 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 10 05:48:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:44 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:45 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct 10 05:48:45 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 109 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=107/108 n=2 ec=56/45 lis/c=107/56 les/c/f=108/57/0 sis=109 pruub=15.000724792s) [2] async=[2] r=-1 lpr=109 pi=[56,109)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 245.454376221s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:48:45 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 109 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=107/108 n=2 ec=56/45 lis/c=107/56 les/c/f=108/57/0 sis=109 pruub=15.000662804s) [2] r=-1 lpr=109 pi=[56,109)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.454376221s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:48:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:48:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:48:45 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Oct 10 05:48:45 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Oct 10 05:48:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:45.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct 10 05:48:46 np0005479822 python3.9[89395]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:48:46 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 10 05:48:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:46 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:46 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 10 05:48:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:47 np0005479822 python3.9[89545]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:48:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:47 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 10 05:48:47 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 10 05:48:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:47.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct 10 05:48:48 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 10 05:48:48 np0005479822 python3.9[89700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:48:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:48 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:48 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 10 05:48:48 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 10 05:48:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:48:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:49.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:48:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 10 05:48:49 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 10 05:48:49 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 10 05:48:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:50 np0005479822 python3.9[89859]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:48:50 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct 10 05:48:50 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 10 05:48:50 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 10 05:48:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:50 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:50 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 10 05:48:51 np0005479822 python3.9[89968]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:48:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:48:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:48:51 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 10 05:48:51 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct 10 05:48:51 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 10 05:48:51 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 10 05:48:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:51.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct 10 05:48:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 10 05:48:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:52 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:52 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 10 05:48:52 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 10 05:48:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:53 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct 10 05:48:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:53.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:53 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 10 05:48:53 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 10 05:48:53 np0005479822 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 10 05:48:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct 10 05:48:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct 10 05:48:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:55.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:48:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:48:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct 10 05:48:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:56 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 05:48:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:48:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:48:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:48:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:48:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 10 05:48:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct 10 05:48:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:58 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:48:59 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 10 05:48:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct 10 05:48:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:48:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:48:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:00 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 10 05:49:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct 10 05:49:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:00 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:01 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 10 05:49:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct 10 05:49:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:49:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:49:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct 10 05:49:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 10 05:49:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:02 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 10 05:49:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:04 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001670 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:08 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 10 05:49:08 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct 10 05:49:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:08 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:09 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 10 05:49:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:10.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:10 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 10 05:49:10 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct 10 05:49:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:10 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003b50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004500 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 10 05:49:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:12.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 10 05:49:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct 10 05:49:12 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 126 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=15.568570137s) [0] r=-1 lpr=126 pi=[90,126)/1 crt=51'1091 mlcod 0'0 active pruub 273.443084717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:12 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 126 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=15.568256378s) [0] r=-1 lpr=126 pi=[90,126)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 273.443084717s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:12 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003b50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:13 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct 10 05:49:13 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 127 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] r=0 lpr=127 pi=[90,127)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:13 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 127 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] r=0 lpr=127 pi=[90,127)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:49:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004520 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:13.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 10 05:49:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:14 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct 10 05:49:14 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 128 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=127/128 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] async=[0] r=0 lpr=127 pi=[90,127)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:49:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:14 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:14 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 10 05:49:14 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 10 05:49:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:15 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct 10 05:49:15 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 129 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=127/128 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.001054764s) [0] async=[0] r=-1 lpr=129 pi=[90,129)/1 crt=51'1091 mlcod 51'1091 active pruub 275.469635010s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:15 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 129 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=127/128 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.000988007s) [0] r=-1 lpr=129 pi=[90,129)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 275.469635010s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094916 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:49:16 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct 10 05:49:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 10 05:49:16 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 130 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=130 pruub=12.303937912s) [0] r=-1 lpr=130 pi=[97,130)/1 crt=51'1091 mlcod 0'0 active pruub 274.105194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:16 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 130 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=130 pruub=12.303892136s) [0] r=-1 lpr=130 pi=[97,130)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 274.105194092s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:16 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:17 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 10 05:49:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct 10 05:49:17 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 131 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] r=0 lpr=131 pi=[97,131)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:17 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 131 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] r=0 lpr=131 pi=[97,131)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:49:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:18.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:18 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct 10 05:49:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004560 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 132 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=131/132 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] async=[0] r=0 lpr=131 pi=[97,131)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:49:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:19 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct 10 05:49:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=131/132 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133 pruub=15.850214958s) [0] async=[0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1091 mlcod 51'1091 active pruub 280.424163818s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:19 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=131/132 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133 pruub=15.850142479s) [0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 280.424163818s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:19.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:20.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:20 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct 10 05:49:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094921 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:49:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:21.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:23.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 10 05:49:24 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct 10 05:49:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:49:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:25 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 10 05:49:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:49:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:49:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 10 05:49:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct 10 05:49:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:27.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:27 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 10 05:49:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:49:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:49:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:49:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:49:28 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct 10 05:49:28 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 137 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=137 pruub=8.719421387s) [2] r=-1 lpr=137 pi=[80,137)/1 crt=51'1091 mlcod 0'0 active pruub 282.430541992s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:28 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 137 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=137 pruub=8.719359398s) [2] r=-1 lpr=137 pi=[80,137)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 282.430541992s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:28 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 10 05:49:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:29.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:29 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 10 05:49:29 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct 10 05:49:29 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 138 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=138) [2]/[1] r=0 lpr=138 pi=[80,138)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:29 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 138 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=138) [2]/[1] r=0 lpr=138 pi=[80,138)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 05:49:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:49:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:49:30 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct 10 05:49:30 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=107/107 les/c/f=108/108/0 sis=139) [1] r=0 lpr=139 pi=[107,139)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:49:30 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 139 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=138/139 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=138) [2]/[1] async=[2] r=0 lpr=138 pi=[80,138)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:49:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 05:49:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:49:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:31.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:31 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct 10 05:49:31 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=138/139 n=5 ec=56/45 lis/c=138/80 les/c/f=139/81/0 sis=140 pruub=15.002747536s) [2] async=[2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1091 mlcod 51'1091 active pruub 291.782257080s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:31 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=138/139 n=5 ec=56/45 lis/c=138/80 les/c/f=139/81/0 sis=140 pruub=15.002635002s) [2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 291.782257080s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:31 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=107/107 les/c/f=108/108/0 sis=140) [1]/[2] r=-1 lpr=140 pi=[107,140)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:31 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=107/107 les/c/f=108/108/0 sis=140) [1]/[2] r=-1 lpr=140 pi=[107,140)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 05:49:31 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 05:49:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct 10 05:49:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Oct 10 05:49:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 142 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=140/107 les/c/f=141/108/0 sis=142) [1] r=0 lpr=142 pi=[107,142)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 05:49:33 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 142 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=140/107 les/c/f=141/108/0 sis=142) [1] r=0 lpr=142 pi=[107,142)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 05:49:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:33.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:34 np0005479822 python3.9[90340]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:49:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Oct 10 05:49:34 np0005479822 ceph-osd[76867]: osd.1 pg_epoch: 143 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=142/143 n=5 ec=56/45 lis/c=140/107 les/c/f=141/108/0 sis=142) [1] r=0 lpr=142 pi=[107,142)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 05:49:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:49:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.527240) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775527275, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3142, "num_deletes": 252, "total_data_size": 9637706, "memory_usage": 9778848, "flush_reason": "Manual Compaction"}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775573932, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6118759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7558, "largest_seqno": 10695, "table_properties": {"data_size": 6104678, "index_size": 9103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 34854, "raw_average_key_size": 22, "raw_value_size": 6074200, "raw_average_value_size": 3949, "num_data_blocks": 395, "num_entries": 1538, "num_filter_entries": 1538, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089675, "oldest_key_time": 1760089675, "file_creation_time": 1760089775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 46767 microseconds, and 21081 cpu microseconds.
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.574000) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6118759 bytes OK
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.574027) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.575524) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.575545) EVENT_LOG_v1 {"time_micros": 1760089775575538, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.575569) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 9622516, prev total WAL file size 9622516, number of live WAL files 2.
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.578574) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(5975KB)], [18(10MB)]
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775578616, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17453906, "oldest_snapshot_seqno": -1}
Oct 10 05:49:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4078 keys, 13426087 bytes, temperature: kUnknown
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775673662, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 13426087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13393583, "index_size": 21194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104167, "raw_average_key_size": 25, "raw_value_size": 13313676, "raw_average_value_size": 3264, "num_data_blocks": 912, "num_entries": 4078, "num_filter_entries": 4078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.673998) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13426087 bytes
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.675294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 141.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.8, 10.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(5.0) write-amplify(2.2) OK, records in: 4616, records dropped: 538 output_compression: NoCompression
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.675316) EVENT_LOG_v1 {"time_micros": 1760089775675304, "job": 8, "event": "compaction_finished", "compaction_time_micros": 95148, "compaction_time_cpu_micros": 46203, "output_level": 6, "num_output_files": 1, "total_output_size": 13426087, "num_input_records": 4616, "num_output_records": 4078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775676450, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775678604, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.578527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:49:35 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:49:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:35.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:36.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:36 np0005479822 python3.9[90628]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 05:49:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:37 np0005479822 python3.9[90781]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 05:49:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:49:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:49:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:37.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:49:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:49:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094938 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:49:38 np0005479822 python3.9[90933]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:49:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:38 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:39 np0005479822 python3.9[91086]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 05:49:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:49:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:41 np0005479822 python3.9[91238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:49:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003430 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:41.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:41 np0005479822 python3.9[91394]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:49:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:42 np0005479822 python3.9[91472]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:49:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:42 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370000d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370000d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094943 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:49:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:43.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:44 np0005479822 python3.9[91625]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 05:49:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:44 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003430 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:45 np0005479822 python3.9[91829]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 05:49:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:46 np0005479822 python3.9[92013]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 05:49:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:46.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:46 np0005479822 python3.9[92165]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 05:49:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:46 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:47.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:49:47 np0005479822 python3.9[92318]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:49:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:48 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:49:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:49:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:49:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:49:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:49:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:49:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:49 np0005479822 python3.9[92472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:49:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:50 np0005479822 python3.9[92648]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:49:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:50 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:51 np0005479822 python3.9[92727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:49:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003fb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:52.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:52 np0005479822 python3.9[92880]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:49:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:52 np0005479822 python3.9[92958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:49:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:52 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:53 np0005479822 python3.9[93111]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:49:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:49:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:49:54 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:49:54 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:49:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:55 np0005479822 python3.9[93288]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:49:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:56.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:56 np0005479822 python3.9[93440]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 05:49:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:56 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:49:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:57 np0005479822 python3.9[93591]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:49:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:49:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:49:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:58 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:59 np0005479822 python3.9[93743]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:49:59 np0005479822 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 05:49:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:59 np0005479822 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 05:49:59 np0005479822 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 05:49:59 np0005479822 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 05:49:59 np0005479822 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 05:49:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:49:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:49:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:49:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:00.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:00 np0005479822 python3.9[93905]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 05:50:00 np0005479822 ceph-mon[79167]: overall HEALTH_OK
Oct 10 05:50:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:00 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:02.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:02 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004030 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:50:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:50:03 np0005479822 python3.9[94059]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:50:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:04 np0005479822 python3.9[94213]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:50:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:04 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:05 np0005479822 systemd[1]: session-39.scope: Deactivated successfully.
Oct 10 05:50:05 np0005479822 systemd[1]: session-39.scope: Consumed 1min 8.481s CPU time.
Oct 10 05:50:05 np0005479822 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Oct 10 05:50:05 np0005479822 systemd-logind[789]: Removed session 39.
Oct 10 05:50:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:08.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:08 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004070 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:09.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:10.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:10 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:11 np0005479822 systemd-logind[789]: New session 40 of user zuul.
Oct 10 05:50:11 np0005479822 systemd[1]: Started Session 40 of User zuul.
Oct 10 05:50:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:50:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:11.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:50:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:12.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:12 np0005479822 python3.9[94422]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:50:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:12 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:13 np0005479822 python3.9[94580]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 05:50:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390002ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:13.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:14.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:14 np0005479822 python3.9[94733]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:50:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:14 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:15 np0005479822 python3.9[94818]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 05:50:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:15.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:16 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390002ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004160 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:18 np0005479822 python3.9[94972]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:50:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:18.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390002ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:19.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:20.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:20 np0005479822 python3.9[95126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:50:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:21 np0005479822 python3.9[95280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:50:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:22.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:22 np0005479822 python3.9[95433]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 05:50:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:23 np0005479822 python3.9[95584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:50:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:24.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:25 np0005479822 python3.9[95742]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:50:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:27 np0005479822 python3.9[95896]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:50:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:29 np0005479822 python3.9[96184]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 05:50:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:30 np0005479822 python3.9[96335]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:50:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:50:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:50:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:31 np0005479822 python3.9[96514]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:50:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:31.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001480 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:33 np0005479822 python3.9[96668]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:50:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:50:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:33.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:50:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:35 np0005479822 python3.9[96822]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:50:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001480 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:36.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:36 np0005479822 python3.9[96977]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 10 05:50:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:37 np0005479822 systemd[1]: session-40.scope: Deactivated successfully.
Oct 10 05:50:37 np0005479822 systemd[1]: session-40.scope: Consumed 19.894s CPU time.
Oct 10 05:50:37 np0005479822 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Oct 10 05:50:37 np0005479822 systemd-logind[789]: Removed session 40.
Oct 10 05:50:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:38.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:40.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780041a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095041 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:50:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:43 np0005479822 systemd-logind[789]: New session 41 of user zuul.
Oct 10 05:50:43 np0005479822 systemd[1]: Started Session 41 of User zuul.
Oct 10 05:50:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3840014c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780041c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:44 np0005479822 python3.9[97159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:50:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:45 np0005479822 python3.9[97315]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:50:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002240 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:45.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:46 np0005479822 python3.9[97508]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:50:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004250 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:47 np0005479822 systemd[1]: session-41.scope: Deactivated successfully.
Oct 10 05:50:47 np0005479822 systemd[1]: session-41.scope: Consumed 2.624s CPU time.
Oct 10 05:50:47 np0005479822 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Oct 10 05:50:47 np0005479822 systemd-logind[789]: Removed session 41.
Oct 10 05:50:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:48.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002240 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004270 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:49.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:50.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:50:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:53 np0005479822 systemd-logind[789]: New session 42 of user zuul.
Oct 10 05:50:53 np0005479822 systemd[1]: Started Session 42 of User zuul.
Oct 10 05:50:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:50:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:50:54 np0005479822 python3.9[97767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:50:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:55 np0005479822 python3.9[97952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:50:55 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:50:55 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:50:55 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:50:55 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:50:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:50:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:50:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:50:56 np0005479822 python3.9[98109]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:50:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:50:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:50:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:57 np0005479822 python3.9[98193]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:50:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:50:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:50:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:59 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:50:59 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:50:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:50:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:50:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:50:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:00 np0005479822 python3.9[98373]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:51:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:01 np0005479822 systemd-logind[789]: Session 20 logged out. Waiting for processes to exit.
Oct 10 05:51:01 np0005479822 systemd[1]: session-20.scope: Deactivated successfully.
Oct 10 05:51:01 np0005479822 systemd[1]: session-20.scope: Consumed 10.156s CPU time.
Oct 10 05:51:01 np0005479822 systemd-logind[789]: Removed session 20.
Oct 10 05:51:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:01 np0005479822 python3.9[98569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:51:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:01.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:51:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:51:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:51:02 np0005479822 python3.9[98721]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:51:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:03 np0005479822 python3.9[98887]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095103 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:51:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:03.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:04 np0005479822 python3.9[98965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:04.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:04 np0005479822 python3.9[99117]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:05 np0005479822 kernel: ganesha.nfsd[91239]: segfault at 50 ip 00007fa45499c32e sp 00007fa40bffe210 error 4 in libntirpc.so.5.8[7fa454981000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 05:51:05 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 05:51:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy ignored for local
Oct 10 05:51:05 np0005479822 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 10 05:51:05 np0005479822 systemd[1]: Started Process Core Dump (PID 99130/UID 0).
Oct 10 05:51:05 np0005479822 python3.9[99198]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:05.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:06 np0005479822 systemd-coredump[99144]: Process 85245 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 65:#012#0  0x00007fa45499c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 05:51:06 np0005479822 systemd[1]: systemd-coredump@0-99130-0.service: Deactivated successfully.
Oct 10 05:51:06 np0005479822 python3.9[99350]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:06 np0005479822 systemd[1]: systemd-coredump@0-99130-0.service: Consumed 1.240s CPU time.
Oct 10 05:51:06 np0005479822 podman[99355]: 2025-10-10 09:51:06.421970769 +0000 UTC m=+0.030280238 container died 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:51:06 np0005479822 systemd[1]: var-lib-containers-storage-overlay-32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2-merged.mount: Deactivated successfully.
Oct 10 05:51:06 np0005479822 podman[99355]: 2025-10-10 09:51:06.496815917 +0000 UTC m=+0.105125336 container remove 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 05:51:06 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 05:51:06 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 05:51:06 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.356s CPU time.
Oct 10 05:51:07 np0005479822 python3.9[99549]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:07 np0005479822 python3.9[99702]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:08 np0005479822 python3.9[99854]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:09 np0005479822 python3.9[100007]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:51:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095111 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:51:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:11.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:12 np0005479822 python3.9[100186]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:51:13 np0005479822 python3.9[100340]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:51:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:13.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:14 np0005479822 python3.9[100493]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:51:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:15 np0005479822 python3.9[100645]: ansible-service_facts Invoked
Oct 10 05:51:15 np0005479822 network[100663]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:51:15 np0005479822 network[100664]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:51:15 np0005479822 network[100665]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:51:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:16 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 1.
Oct 10 05:51:16 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:51:16 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.356s CPU time.
Oct 10 05:51:16 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:51:17 np0005479822 podman[100752]: 2025-10-10 09:51:17.024148902 +0000 UTC m=+0.049651477 container create 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:51:17 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 05:51:17 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:51:17 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:51:17 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:51:17 np0005479822 podman[100752]: 2025-10-10 09:51:17.095900216 +0000 UTC m=+0.121402881 container init 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 05:51:17 np0005479822 podman[100752]: 2025-10-10 09:51:17.004231544 +0000 UTC m=+0.029734149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:51:17 np0005479822 podman[100752]: 2025-10-10 09:51:17.105685495 +0000 UTC m=+0.131188090 container start 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 05:51:17 np0005479822 bash[100752]: 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed
Oct 10 05:51:17 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 05:51:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 05:51:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:51:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:51:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:51:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:18.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:21 np0005479822 python3.9[101225]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:51:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:51:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:51:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:24 np0005479822 python3.9[101380]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 05:51:25 np0005479822 python3.9[101533]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:25.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:26 np0005479822 python3.9[101611]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:27 np0005479822 python3.9[101763]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:27 np0005479822 python3.9[101842]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:51:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:51:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:28.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:29 np0005479822 python3.9[101995]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:29.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:30.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:31 np0005479822 python3.9[102163]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:51:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:31.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:32.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:32 np0005479822 python3.9[102273]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:51:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095133 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:51:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:33 np0005479822 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Oct 10 05:51:33 np0005479822 systemd[1]: session-42.scope: Deactivated successfully.
Oct 10 05:51:33 np0005479822 systemd[1]: session-42.scope: Consumed 28.162s CPU time.
Oct 10 05:51:33 np0005479822 systemd-logind[789]: Removed session 42.
Oct 10 05:51:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:33.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:51:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:34.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:51:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:35.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:37.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:38 np0005479822 systemd-logind[789]: New session 43 of user zuul.
Oct 10 05:51:39 np0005479822 systemd[1]: Started Session 43 of User zuul.
Oct 10 05:51:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:39 np0005479822 python3.9[102459]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:40.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095140 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:51:40 np0005479822 python3.9[102611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:41 np0005479822 python3.9[102689]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:41 np0005479822 systemd[1]: session-43.scope: Deactivated successfully.
Oct 10 05:51:41 np0005479822 systemd[1]: session-43.scope: Consumed 2.030s CPU time.
Oct 10 05:51:41 np0005479822 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Oct 10 05:51:41 np0005479822 systemd-logind[789]: Removed session 43.
Oct 10 05:51:41 np0005479822 systemd[82226]: Created slice User Background Tasks Slice.
Oct 10 05:51:41 np0005479822 systemd[82226]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 05:51:41 np0005479822 systemd[82226]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 05:51:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:43.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:44.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:45.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:51:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:51:46 np0005479822 systemd-logind[789]: New session 44 of user zuul.
Oct 10 05:51:46 np0005479822 systemd[1]: Started Session 44 of User zuul.
Oct 10 05:51:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:47 np0005479822 python3.9[102873]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:51:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:49 np0005479822 python3.9[103029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:50 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:51:50 np0005479822 python3.9[103205]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:50 np0005479822 python3.9[103283]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.4020x0r0 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:51 np0005479822 python3.9[103461]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:51.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:52 np0005479822 python3.9[103539]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.0d8o8r3m recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:51:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:51:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:51:53 np0005479822 python3.9[103691]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:53.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:53 np0005479822 python3.9[103844]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:54 np0005479822 python3.9[103922]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:55 np0005479822 python3.9[104074]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:55 np0005479822 python3.9[104153]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:51:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:56 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:51:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:56 np0005479822 python3.9[104305]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:51:57 np0005479822 python3.9[104457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:57 np0005479822 python3.9[104536]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:51:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:51:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:51:58 np0005479822 python3.9[104688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:51:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:59 np0005479822 python3.9[104766]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:51:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:51:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:51:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:51:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:00.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:00 np0005479822 python3.9[104969]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:52:00 np0005479822 systemd[1]: Reloading.
Oct 10 05:52:00 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:52:00 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:52:00 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:52:00 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:52:00 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:52:00 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:52:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:01 np0005479822 python3.9[105192]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:01.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:02 np0005479822 python3.9[105270]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095202 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:52:02 np0005479822 python3.9[105422]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:03 np0005479822 python3.9[105500]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:04 np0005479822 python3.9[105653]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:52:04 np0005479822 systemd[1]: Reloading.
Oct 10 05:52:04 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:52:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:04 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:52:04 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 05:52:04 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 05:52:04 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 05:52:04 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 05:52:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:05 np0005479822 python3.9[105847]: ansible-ansible.builtin.service_facts Invoked
Oct 10 05:52:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:05 np0005479822 network[105864]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:52:05 np0005479822 network[105865]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:52:05 np0005479822 network[105866]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:52:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:05.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:52:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:52:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:07.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:10 np0005479822 python3.9[106158]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:11 np0005479822 python3.9[106236]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:12 np0005479822 python3.9[106414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:12 np0005479822 python3.9[106566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:13 np0005479822 python3.9[106644]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:14.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:14 np0005479822 python3.9[106797]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 05:52:14 np0005479822 systemd[1]: Starting Time & Date Service...
Oct 10 05:52:14 np0005479822 systemd[1]: Started Time & Date Service.
Oct 10 05:52:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:15 np0005479822 python3.9[106954]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:16.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:16 np0005479822 python3.9[107106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:17 np0005479822 python3.9[107184]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:17 np0005479822 python3.9[107337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:18 np0005479822 python3.9[107415]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5vj5rwhm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:19 np0005479822 python3.9[107567]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:19 np0005479822 python3.9[107646]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:20.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:20 np0005479822 python3.9[107798]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:52:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:21 np0005479822 python3[107952]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 05:52:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:22 np0005479822 python3.9[108104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:23 np0005479822 python3.9[108182]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:23 np0005479822 python3.9[108335]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:24 np0005479822 python3.9[108413]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:25 np0005479822 python3.9[108565]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:25 np0005479822 python3.9[108644]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:26.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:26.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:26 np0005479822 python3.9[108796]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:27 np0005479822 python3.9[108874]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:28.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:28 np0005479822 python3.9[109027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:28 np0005479822 python3.9[109105]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:29 np0005479822 python3.9[109258]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:52:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:30 np0005479822 python3.9[109413]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:31 np0005479822 python3.9[109579]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:32 np0005479822 python3.9[109744]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:32.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:33 np0005479822 python3.9[109896]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 05:52:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:33 np0005479822 python3.9[110050]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 05:52:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:34.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:34 np0005479822 systemd[1]: session-44.scope: Deactivated successfully.
Oct 10 05:52:34 np0005479822 systemd[1]: session-44.scope: Consumed 37.359s CPU time.
Oct 10 05:52:34 np0005479822 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Oct 10 05:52:34 np0005479822 systemd-logind[789]: Removed session 44.
Oct 10 05:52:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:36.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:38.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:40 np0005479822 systemd-logind[789]: New session 45 of user zuul.
Oct 10 05:52:40 np0005479822 systemd[1]: Started Session 45 of User zuul.
Oct 10 05:52:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:41 np0005479822 python3.9[110234]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 05:52:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:42 np0005479822 python3.9[110387]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:52:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:42 np0005479822 python3.9[110541]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 10 05:52:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:43 np0005479822 python3.9[110694]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.81qdh614 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:52:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:44 np0005479822 python3.9[110819]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.81qdh614 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089963.1523638-103-112019410823580/.source.81qdh614 _original_basename=.5g2hjn5u follow=False checksum=2d908d3ce99ab235b2c2751c9a38992c3c685672 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:44 np0005479822 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 05:52:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:45 np0005479822 python3.9[110974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:52:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:46.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:46 np0005479822 python3.9[111126]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs576V3VvbSgv48Ml4JM3ripPY5VUVh8vdkDr1njjfd7J/WrQQkTf/D0b7+eGTXj3Y1fx1/haVrDafo7g0NqcSZX+zNUgTCnYPWafo7RMG4Q7ITVk1NPIkAC1cDUxHNeWhXaOkxCz96sTkO4aNW3uoFjsp2JkJtRJmHzT7q/bc0N9x7YcWh9vwRRBiOKlV8cWMHuHUzOlloEQLN67Dht1xHWr1eO/SITqUlWY13tc/54xQuo8nBQNNX9ArhMbJz2a9AoNVUAAYFF8hWFI5ES/GL9qsCp8dnmAtrY4Rc07QmHo1RkcjXe1f6D+vymRIP3YOqIjlWp0blCTfcCGno5lBa9f5JachIsogk+5+GYx4AAbWLyxxecfKzdCxrGnQlfFgldc1xDN1RG+8HwFEAuHQDWTCDUgF67FXSHy7aVxrdzU4046193/o3VKTpSaJmFldASxFgyUeujs56OgC0qYM0zKV4jOsMBcocVHvH/1FOPWIr81XXYvu6C/Ntd6sBj0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSf7pFS/S1SmUMk/yMobwR+LTaQZlAhBqo7Ido5r8dg#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1l0EOuMseZ7ulHkfzzVtKv+5A9EWRy+oXVB+t370vohhJoN3+lviS8xoR8GttJUcHVCaeioniRtOWysbNdC0I=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUnwO+j5aInA4FKMx5pWF8B0Zp6L17GsYV5RBbu6iT67LtXjwbz5nP4EC7t80boMHnS7DRNCAxF0FNMVhQ9o4+1E1n2mrUxxAw8YxcZTabu/lAqRb4I6RzmXdXSA9mF8O3onswi/KhJg6YUTFEWCuxWrMLco15IatKi+hNqcRUk1DreR2L/YN0W5qXkvj1z3aoph1h3Yn1lRjuQDrVHp6lCywixC2pHwYG+CrPyX+0PkXJg+JRvRdxNCIw0D0zOkJrnppmT8XpIj42JLRUGGV592XFVXHiEhZdOI2bdzPy490EfIbWF9Symqi/V5vf8SK9LMOscHXkD7jsT6VKzsUXyk6/IzzZ2TzhD173lt8HpRJyaZq4ME0ZSVYNyD58DN/CQ3xpO1c1E8Wp4fUswc4WHmb/eILnY0lDXOZt6Hb/e+K6RHu5e5GOo0KSfei/LyrqJkBQn2P8UkbJvrUh2bNw+whjvT5CmXd3rPCw+Xq3/K3Gpit1K/4pC0zGC+CQr7E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILklS4uW4IrGY5dWZTg4VeKVeFB3jPeUpu/8f4D1+rd5#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCelD2lLiMWT09YjxTI9IfdSnHfdMuHKAAEYFKZmJg34mgwUIDqUQqoc9I6a7Ps9pRizY+UpHWL//lD7hvvhD5k=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDarlOcgDXqRdSww3oIuqu7nGIBJToNGSnU1ljOr6GTlHTxxOoTztIrvZrPaJA8w/ixztkhFZZSdRPw4meYayY05CNu9SneiL62twzDLDsqeDPAspkh69Ljj5aGCLf6GJDiK0m2h1jLDIFtXH3lIQE9781zA7ZQ8+/xeF4yRS1/Fb5CXDG+oi/J0veCffs6t0TYmrUfSgS2H2y0UxNu7C6GoQKRde1arPLOYexvlg2RjlWM6Ex4JCqTAd9EN330Kh4HUr3r46ET8mwi1mPndibbiW0heXgrg8FeV5hBqOxQsGgLEKpX1cNAz6Rr0C5Hg1xfGcsJtep88vbJFmMyV1jNowDtJCYpprqa16Nj35HBuuz7zbzVlIdeQhEJ9I4I7eNhUxlb2/XYRXy2hfsrM9D2TP7B+bVPLjlqgqy8stBhGBCtH32ppNsXHE6uGPHMovcz2VhbP/P3sp9NQV+hF2Q0RbBXrQZkEI9YJdhxQw5hyOqwfPrEEBFy8FpzSKfBAW0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1nQuW/lbxVJxo9H20J7i0+Z6cHtufrF4VbA6zs724f#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0oTxSrAqx34tAubl7rouYPI7qhs6NhoDmGr3PTW1+mypEQw0EO+pZ99zSRnweC5RBoL080AgUKo7KN+v3LDHw=#012 create=True mode=0644 path=/tmp/ansible.81qdh614 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:47 np0005479822 python3.9[111279]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.81qdh614' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:52:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:48 np0005479822 python3.9[111433]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.81qdh614 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:52:49 np0005479822 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Oct 10 05:52:49 np0005479822 systemd[1]: session-45.scope: Deactivated successfully.
Oct 10 05:52:49 np0005479822 systemd[1]: session-45.scope: Consumed 6.170s CPU time.
Oct 10 05:52:49 np0005479822 systemd-logind[789]: Removed session 45.
Oct 10 05:52:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:52:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:52:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:52.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:54 np0005479822 systemd-logind[789]: New session 46 of user zuul.
Oct 10 05:52:54 np0005479822 systemd[1]: Started Session 46 of User zuul.
Oct 10 05:52:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:52:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:54.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:52:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:55 np0005479822 python3.9[111640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:52:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:52:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:56.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:52:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:56 np0005479822 python3.9[111797]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 05:52:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:52:57 np0005479822 python3.9[111951]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 05:52:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:52:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:52:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:58.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:52:58 np0005479822 python3.9[112105]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.568489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978568559, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 251, "total_data_size": 6235914, "memory_usage": 6302056, "flush_reason": "Manual Compaction"}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978584828, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2509325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10700, "largest_seqno": 12844, "table_properties": {"data_size": 2503062, "index_size": 3206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15654, "raw_average_key_size": 20, "raw_value_size": 2489405, "raw_average_value_size": 3195, "num_data_blocks": 143, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089776, "oldest_key_time": 1760089776, "file_creation_time": 1760089978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 16408 microseconds, and 10226 cpu microseconds.
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.584900) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2509325 bytes OK
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.584929) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.586601) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.586624) EVENT_LOG_v1 {"time_micros": 1760089978586616, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.586647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6226380, prev total WAL file size 6226380, number of live WAL files 2.
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.589050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2450KB)], [21(12MB)]
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978589075, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 15935412, "oldest_snapshot_seqno": -1}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4434 keys, 14286946 bytes, temperature: kUnknown
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978650199, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14286946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14253052, "index_size": 21688, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111770, "raw_average_key_size": 25, "raw_value_size": 14167996, "raw_average_value_size": 3195, "num_data_blocks": 932, "num_entries": 4434, "num_filter_entries": 4434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.650586) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14286946 bytes
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.652197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.3 rd, 233.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(12.0) write-amplify(5.7) OK, records in: 4857, records dropped: 423 output_compression: NoCompression
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.652236) EVENT_LOG_v1 {"time_micros": 1760089978652218, "job": 10, "event": "compaction_finished", "compaction_time_micros": 61229, "compaction_time_cpu_micros": 27540, "output_level": 6, "num_output_files": 1, "total_output_size": 14286946, "num_input_records": 4857, "num_output_records": 4434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978653318, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978657735, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.589014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:52:58 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:52:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:59 np0005479822 python3.9[112259]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:52:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00044b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:52:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:00 np0005479822 python3.9[112411]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:00 np0005479822 systemd[1]: session-46.scope: Deactivated successfully.
Oct 10 05:53:00 np0005479822 systemd[1]: session-46.scope: Consumed 5.007s CPU time.
Oct 10 05:53:00 np0005479822 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Oct 10 05:53:00 np0005479822 systemd-logind[789]: Removed session 46.
Oct 10 05:53:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00044d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:04.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095306 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:53:06 np0005479822 systemd-logind[789]: New session 47 of user zuul.
Oct 10 05:53:06 np0005479822 systemd[1]: Started Session 47 of User zuul.
Oct 10 05:53:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:07 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:53:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:53:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:53:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:53:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:53:07 np0005479822 python3.9[112675]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:53:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:09 np0005479822 python3.9[112832]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:53:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:09 np0005479822 python3.9[112917]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 05:53:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:11 np0005479822 python3.9[113094]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:53:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:12.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:13 np0005479822 python3.9[113270]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 05:53:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:53:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:53:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:14 np0005479822 python3.9[113421]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:53:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:14.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.966981) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994967016, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 429, "num_deletes": 251, "total_data_size": 564009, "memory_usage": 572648, "flush_reason": "Manual Compaction"}
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994971404, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 372836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12849, "largest_seqno": 13273, "table_properties": {"data_size": 370373, "index_size": 563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5935, "raw_average_key_size": 18, "raw_value_size": 365412, "raw_average_value_size": 1131, "num_data_blocks": 24, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089979, "oldest_key_time": 1760089979, "file_creation_time": 1760089994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4466 microseconds, and 2228 cpu microseconds.
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.971445) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 372836 bytes OK
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.971467) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973067) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973082) EVENT_LOG_v1 {"time_micros": 1760089994973077, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 561283, prev total WAL file size 561283, number of live WAL files 2.
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(364KB)], [24(13MB)]
Oct 10 05:53:14 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994973725, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14659782, "oldest_snapshot_seqno": -1}
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4242 keys, 12701680 bytes, temperature: kUnknown
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995032582, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 12701680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12670785, "index_size": 19201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108719, "raw_average_key_size": 25, "raw_value_size": 12590731, "raw_average_value_size": 2968, "num_data_blocks": 813, "num_entries": 4242, "num_filter_entries": 4242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.032851) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12701680 bytes
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.033877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.7 rd, 215.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(73.4) write-amplify(34.1) OK, records in: 4757, records dropped: 515 output_compression: NoCompression
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.033910) EVENT_LOG_v1 {"time_micros": 1760089995033896, "job": 12, "event": "compaction_finished", "compaction_time_micros": 58943, "compaction_time_cpu_micros": 37295, "output_level": 6, "num_output_files": 1, "total_output_size": 12701680, "num_input_records": 4757, "num_output_records": 4242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995034161, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995039224, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:53:15 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 05:53:15 np0005479822 python3.9[113571]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:53:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00048c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:15 np0005479822 systemd[1]: session-47.scope: Deactivated successfully.
Oct 10 05:53:15 np0005479822 systemd[1]: session-47.scope: Consumed 6.729s CPU time.
Oct 10 05:53:15 np0005479822 systemd-logind[789]: Session 47 logged out. Waiting for processes to exit.
Oct 10 05:53:15 np0005479822 systemd-logind[789]: Removed session 47.
Oct 10 05:53:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:53:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:16.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:16.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:18.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:18.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:18 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:53:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:18 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:53:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:20.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:20 np0005479822 systemd-logind[789]: New session 48 of user zuul.
Oct 10 05:53:20 np0005479822 systemd[1]: Started Session 48 of User zuul.
Oct 10 05:53:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:20.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:21 np0005479822 python3.9[113753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:53:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:53:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:22.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:23 np0005479822 python3.9[113909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:24.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:24 np0005479822 python3.9[114062]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:25 np0005479822 python3.9[114214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:25 np0005479822 python3.9[114338]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090004.404119-159-218216821496914/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=fc301ff04c1bdbf67ce21f61b2409e6eab9f5113 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:26.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:26.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:26 np0005479822 python3.9[114490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:27 np0005479822 python3.9[114613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090005.944469-159-266128578410019/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=6d432417c0c3c485924638569c72973f4b3272fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:28 np0005479822 python3.9[114766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095328 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:53:28 np0005479822 python3.9[114889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090007.5264344-159-1861374170953/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=0788f60270857301d82728379b3c6f1e054161c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:29 np0005479822 python3.9[115042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:30 np0005479822 python3.9[115194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:31 np0005479822 python3.9[115347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:32 np0005479822 python3.9[115495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090011.0119429-355-29036934832909/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=02480a739908564efbb8591bd6a1d73205710dc7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:33 np0005479822 python3.9[115647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:33 np0005479822 python3.9[115771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090012.4455447-355-55450022385589/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=abcc61006dfeb8ab87ea24afb3b53290e7b990dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:34 np0005479822 python3.9[115923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:34.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:35 np0005479822 python3.9[116046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090013.907635-355-141073060896331/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=f29b3f60c4947f05538559980518c0fcc28c88a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:36 np0005479822 python3.9[116200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:53:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:53:36 np0005479822 python3.9[116352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:37 np0005479822 python3.9[116506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:38 np0005479822 python3.9[116629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090017.0920355-553-276572821215870/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=2eaad5181c478c56c6664f5d92519151a29ae939 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:38.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:39 np0005479822 python3.9[116781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:39 np0005479822 python3.9[116905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090018.5802143-553-76385888343045/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=abcc61006dfeb8ab87ea24afb3b53290e7b990dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:40 np0005479822 python3.9[117057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:41 np0005479822 python3.9[117180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090019.9343321-553-230285406189437/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=ad198db31845dce8dbb361567f3eab9b32ae6934 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:42 np0005479822 python3.9[117333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:43 np0005479822 python3.9[117485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:43 np0005479822 python3.9[117609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090022.680377-781-79825467494619/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:44 np0005479822 python3.9[117761]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:45 np0005479822 python3.9[117914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:46.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:46 np0005479822 python3.9[118037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090025.0293465-856-113054238319818/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:46.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:47 np0005479822 python3.9[118189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:47 np0005479822 python3.9[118342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:48 np0005479822 python3.9[118465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090027.4078746-938-121994815636879/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4009740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:49 np0005479822 python3.9[118617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:50 np0005479822 python3.9[118770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:50 np0005479822 python3.9[118893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090029.5289757-1013-185673998754397/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4009740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:51 np0005479822 python3.9[119046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:52 np0005479822 python3.9[119223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:53 np0005479822 python3.9[119346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090032.0309086-1069-156904704374440/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4009740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:54 np0005479822 python3.9[119499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:53:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:54.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:55 np0005479822 python3.9[119653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:53:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:55 np0005479822 python3.9[119777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090034.4942918-1103-172224008197639/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:53:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:56 np0005479822 systemd[1]: session-48.scope: Deactivated successfully.
Oct 10 05:53:56 np0005479822 systemd[1]: session-48.scope: Consumed 28.468s CPU time.
Oct 10 05:53:56 np0005479822 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Oct 10 05:53:56 np0005479822 systemd-logind[789]: Removed session 48.
Oct 10 05:53:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:53:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:53:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:58.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:53:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:53:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:53:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:53:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:53:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:00.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:02 np0005479822 systemd-logind[789]: New session 49 of user zuul.
Oct 10 05:54:02 np0005479822 systemd[1]: Started Session 49 of User zuul.
Oct 10 05:54:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:02.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:02.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:03 np0005479822 python3.9[119960]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:04 np0005479822 python3.9[120113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:04.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:04 np0005479822 python3.9[120236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090043.2978232-63-90547915940566/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f4f20d3bcbb08befb7837fd0e595f186c33a7cc2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:05 np0005479822 python3.9[120389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:06.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:06 np0005479822 python3.9[120512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090045.0966501-63-277377550083821/.source.conf _original_basename=ceph.conf follow=False checksum=1a4b9adde8f120db415fb0ad56382b109e0fedc1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:06.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:06 np0005479822 systemd[1]: session-49.scope: Deactivated successfully.
Oct 10 05:54:06 np0005479822 systemd[1]: session-49.scope: Consumed 3.404s CPU time.
Oct 10 05:54:06 np0005479822 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Oct 10 05:54:06 np0005479822 systemd-logind[789]: Removed session 49.
Oct 10 05:54:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 05:54:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 05:54:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880014b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:10.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:12 np0005479822 systemd-logind[789]: New session 50 of user zuul.
Oct 10 05:54:12 np0005479822 systemd[1]: Started Session 50 of User zuul.
Oct 10 05:54:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:13 np0005479822 python3.9[120720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:54:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:14.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:14 np0005479822 python3.9[121012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:54:14 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:54:14 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:14 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:14 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:54:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:15 np0005479822 python3.9[121181]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:54:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095415 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:54:16 np0005479822 python3.9[121332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:54:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:16.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:17 np0005479822 python3.9[121484]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 05:54:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b6c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:18.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:19 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 10 05:54:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:20 np0005479822 python3.9[121642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:54:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:20.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:54:20 np0005479822 python3.9[121751]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:54:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004250 fd 39 proxy ignored for local
Oct 10 05:54:21 np0005479822 kernel: ganesha.nfsd[102000]: segfault at 50 ip 00007f6c56b7e32e sp 00007f6c2cff8210 error 4 in libntirpc.so.5.8[7f6c56b63000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 05:54:21 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 05:54:21 np0005479822 systemd[1]: Started Process Core Dump (PID 121753/UID 0).
Oct 10 05:54:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:22 np0005479822 systemd-coredump[121754]: Process 100776 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 44:#012#0  0x00007f6c56b7e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 05:54:22 np0005479822 systemd[1]: systemd-coredump@1-121753-0.service: Deactivated successfully.
Oct 10 05:54:22 np0005479822 systemd[1]: systemd-coredump@1-121753-0.service: Consumed 1.226s CPU time.
Oct 10 05:54:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:22 np0005479822 podman[121815]: 2025-10-10 09:54:22.570694677 +0000 UTC m=+0.048686688 container died 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 05:54:22 np0005479822 systemd[1]: var-lib-containers-storage-overlay-3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac-merged.mount: Deactivated successfully.
Oct 10 05:54:22 np0005479822 podman[121815]: 2025-10-10 09:54:22.62767687 +0000 UTC m=+0.105668841 container remove 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:54:22 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 05:54:22 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 05:54:22 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.051s CPU time.
Oct 10 05:54:23 np0005479822 python3.9[121955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:54:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:24 np0005479822 python3[122111]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 10 05:54:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:24.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:25 np0005479822 python3.9[122263]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:26 np0005479822 python3.9[122416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:27 np0005479822 python3.9[122494]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095427 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:54:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:27 np0005479822 python3.9[122647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:28 np0005479822 python3.9[122725]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.60z2caf3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:29 np0005479822 python3.9[122877]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:29 np0005479822 python3.9[122956]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:30.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:30.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:30 np0005479822 python3.9[123108]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:31 np0005479822 python3[123262]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 05:54:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:32.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:32 np0005479822 python3.9[123439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:32 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 2.
Oct 10 05:54:32 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:54:32 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.051s CPU time.
Oct 10 05:54:32 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:54:33 np0005479822 podman[123606]: 2025-10-10 09:54:33.225500185 +0000 UTC m=+0.071504642 container create 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:54:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 05:54:33 np0005479822 podman[123606]: 2025-10-10 09:54:33.19619715 +0000 UTC m=+0.042201627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:54:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:54:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:54:33 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:54:33 np0005479822 podman[123606]: 2025-10-10 09:54:33.314298525 +0000 UTC m=+0.160303042 container init 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 05:54:33 np0005479822 podman[123606]: 2025-10-10 09:54:33.327752229 +0000 UTC m=+0.173756696 container start 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 05:54:33 np0005479822 bash[123606]: 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 05:54:33 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:54:33 np0005479822 python3.9[123610]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090072.0973797-432-219297481913025/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 05:54:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:54:34 np0005479822 python3.9[123819]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:34.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:34.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:35 np0005479822 python3.9[123944]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090073.6178687-477-124230480474413/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:35 np0005479822 python3.9[124097]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:36.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:36 np0005479822 python3.9[124222]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090075.270339-522-219546861891349/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:36.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:37 np0005479822 python3.9[124374]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:38 np0005479822 python3.9[124500]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090076.79518-567-118076608929673/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:38.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:39 np0005479822 python3.9[124652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:54:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:54:39 np0005479822 python3.9[124778]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090078.3927078-612-258648593591799/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:40.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:40 np0005479822 python3.9[124930]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:41 np0005479822 python3.9[125083]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095441 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:54:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:42.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:42.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:42 np0005479822 python3.9[125238]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:43 np0005479822 python3.9[125391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:44 np0005479822 python3.9[125544]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:54:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:44.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:45 np0005479822 python3.9[125698]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000b:nfs.cephfs.0: -2
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:45 np0005479822 python3.9[125869]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:47 np0005479822 python3.9[126019]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:54:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:48.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:48 np0005479822 python3.9[126173]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:48 np0005479822 ovs-vsctl[126174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 10 05:54:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:48.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095449 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:54:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:49 np0005479822 python3.9[126326]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:50 np0005479822 python3.9[126482]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:54:50 np0005479822 ovs-vsctl[126483]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 10 05:54:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:50.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:51 np0005479822 python3.9[126633]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:54:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:51 np0005479822 python3.9[126788]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:54:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:52.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:52.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:52 np0005479822 python3.9[126965]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:53 np0005479822 python3.9[127043]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:54:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:53 np0005479822 python3.9[127196]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:54.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:54 np0005479822 python3.9[127274]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:54:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:54.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 05:54:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 21.40 MB, 0.04 MB/s#012Interval WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 10 05:54:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:55 np0005479822 python3.9[127426]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:55 np0005479822 python3.9[127579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:54:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:54:56 np0005479822 python3.9[127657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:56.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:57 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:57 np0005479822 python3.9[127809]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:54:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:57 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:57 np0005479822 python3.9[127888]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:54:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:57 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:54:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:54:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:54:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:54:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:58.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:54:58 np0005479822 python3.9[128040]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:54:58 np0005479822 systemd[1]: Reloading.
Oct 10 05:54:58 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:54:58 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:54:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:59 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:59 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:54:59 np0005479822 python3.9[128231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:54:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:59 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:00.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:00 np0005479822 python3.9[128309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:55:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:01 np0005479822 python3.9[128461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:01 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:01 np0005479822 python3.9[128540]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:55:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:01 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:01 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:02.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:02 np0005479822 python3.9[128692]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:55:02 np0005479822 systemd[1]: Reloading.
Oct 10 05:55:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:02.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:02 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:55:02 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:55:02 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 05:55:02 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 05:55:02 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 05:55:02 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 05:55:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:03 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:03 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:03 np0005479822 python3.9[128888]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:03 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:04 np0005479822 python3.9[129040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:05 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:05 np0005479822 python3.9[129164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090104.065703-1365-41433418606351/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:05 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:05 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:06.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:06 np0005479822 python3.9[129316]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:06.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:07 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:07 np0005479822 python3.9[129468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:07 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:07 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:08 np0005479822 python3.9[129592]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090106.7524204-1440-76328038016746/.source.json _original_basename=.f_hkh9hg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:55:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:08.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:08.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:08 np0005479822 python3.9[129744]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:55:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:09 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:09 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:09 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:10.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:10.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:11 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:11 np0005479822 python3.9[130172]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 10 05:55:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:11 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:11 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:12 np0005479822 python3.9[130350]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 05:55:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:13 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:13 np0005479822 python3.9[130503]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 05:55:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:13 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:13 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:15 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:15 np0005479822 python3[130682]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 05:55:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:15 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:15 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:16.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:17 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:17 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:17 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:19 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:19 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:19 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:20.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:21 np0005479822 podman[130698]: 2025-10-10 09:55:21.066253259 +0000 UTC m=+5.516159970 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 05:55:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:21 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:21 np0005479822 podman[130889]: 2025-10-10 09:55:21.209491355 +0000 UTC m=+0.049747041 container create bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 05:55:21 np0005479822 podman[130889]: 2025-10-10 09:55:21.18460686 +0000 UTC m=+0.024862536 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 05:55:21 np0005479822 python3[130682]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 05:55:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:55:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:55:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:55:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:55:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:55:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:55:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:21 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:21 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 05:55:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2459 writes, 14K keys, 2459 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2459 writes, 2459 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2459 writes, 14K keys, 2459 commit groups, 1.0 writes per commit group, ingest: 37.91 MB, 0.06 MB/s#012Interval WAL: 2459 writes, 2459 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    141.5      0.15              0.08         6    0.024       0      0       0.0       0.0#012  L6      1/0   12.11 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    206.8    181.3      0.33              0.17         5    0.066     21K   2259       0.0       0.0#012 Sum      1/0   12.11 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    143.8    169.2      0.48              0.25        11    0.043     21K   2259       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    144.6    170.1      0.48              0.25        10    0.048     21K   2259       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    206.8    181.3      0.33              0.17         5    0.066     21K   2259       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    143.9      0.14              0.08         5    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.020#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 2.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(170,2.40 MB,0.787956%) FilterBlock(11,69.05 KB,0.0221805%) IndexBlock(11,132.45 KB,0.0425489%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 05:55:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:23 np0005479822 python3.9[131094]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:55:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:23 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:23 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:23 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:24 np0005479822 python3.9[131249]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:55:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:24 np0005479822 python3.9[131325]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:55:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:25 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:25 np0005479822 python3.9[131476]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090124.7619548-1704-75025744652160/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:55:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:25 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:25 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:26 np0005479822 python3.9[131553]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 05:55:26 np0005479822 systemd[1]: Reloading.
Oct 10 05:55:26 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:55:26 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:55:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:27 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:27 np0005479822 python3.9[131690]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:55:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:27 np0005479822 systemd[1]: Reloading.
Oct 10 05:55:27 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:55:27 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:55:27 np0005479822 systemd[1]: Starting ovn_controller container...
Oct 10 05:55:27 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:55:27 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:55:27 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:55:27 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c985298b94bb2ac08e4e80495a89deaa1110af6f6b90fce21a195bfa4aca6f9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 10 05:55:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:27 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:27 np0005479822 systemd[1]: Started /usr/bin/podman healthcheck run bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7.
Oct 10 05:55:27 np0005479822 podman[131733]: 2025-10-10 09:55:27.804601942 +0000 UTC m=+0.191218710 container init bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 10 05:55:27 np0005479822 ovn_controller[131749]: + sudo -E kolla_set_configs
Oct 10 05:55:27 np0005479822 podman[131733]: 2025-10-10 09:55:27.845993525 +0000 UTC m=+0.232610243 container start bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct 10 05:55:27 np0005479822 edpm-start-podman-container[131733]: ovn_controller
Oct 10 05:55:27 np0005479822 systemd[1]: Created slice User Slice of UID 0.
Oct 10 05:55:27 np0005479822 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 05:55:27 np0005479822 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 05:55:27 np0005479822 systemd[1]: Starting User Manager for UID 0...
Oct 10 05:55:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:27 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:27 np0005479822 edpm-start-podman-container[131732]: Creating additional drop-in dependency for "ovn_controller" (bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7)
Oct 10 05:55:28 np0005479822 podman[131756]: 2025-10-10 09:55:28.0042878 +0000 UTC m=+0.141456919 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 10 05:55:28 np0005479822 systemd[1]: bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7-57f2736abf5f19a4.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 05:55:28 np0005479822 systemd[1]: bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7-57f2736abf5f19a4.service: Failed with result 'exit-code'.
Oct 10 05:55:28 np0005479822 systemd[1]: Reloading.
Oct 10 05:55:28 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:55:28 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:55:28 np0005479822 systemd[131781]: Queued start job for default target Main User Target.
Oct 10 05:55:28 np0005479822 systemd[131781]: Created slice User Application Slice.
Oct 10 05:55:28 np0005479822 systemd[131781]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 05:55:28 np0005479822 systemd[131781]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 05:55:28 np0005479822 systemd[131781]: Reached target Paths.
Oct 10 05:55:28 np0005479822 systemd[131781]: Reached target Timers.
Oct 10 05:55:28 np0005479822 systemd[131781]: Starting D-Bus User Message Bus Socket...
Oct 10 05:55:28 np0005479822 systemd[131781]: Starting Create User's Volatile Files and Directories...
Oct 10 05:55:28 np0005479822 systemd[131781]: Listening on D-Bus User Message Bus Socket.
Oct 10 05:55:28 np0005479822 systemd[131781]: Reached target Sockets.
Oct 10 05:55:28 np0005479822 systemd[131781]: Finished Create User's Volatile Files and Directories.
Oct 10 05:55:28 np0005479822 systemd[131781]: Reached target Basic System.
Oct 10 05:55:28 np0005479822 systemd[131781]: Reached target Main User Target.
Oct 10 05:55:28 np0005479822 systemd[131781]: Startup finished in 162ms.
Oct 10 05:55:28 np0005479822 systemd[1]: Started User Manager for UID 0.
Oct 10 05:55:28 np0005479822 systemd[1]: Started ovn_controller container.
Oct 10 05:55:28 np0005479822 systemd[1]: Started Session c1 of User root.
Oct 10 05:55:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:28.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: INFO:__main__:Validating config file
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: INFO:__main__:Writing out command to execute
Oct 10 05:55:28 np0005479822 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: ++ cat /run_command
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + ARGS=
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + sudo kolla_copy_cacerts
Oct 10 05:55:28 np0005479822 systemd[1]: Started Session c2 of User root.
Oct 10 05:55:28 np0005479822 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + [[ ! -n '' ]]
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + . kolla_extend_start
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + umask 0022
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5394] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5408] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5433] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5447] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5456] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 05:55:28 np0005479822 kernel: br-int: entered promiscuous mode
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00021|main|INFO|OVS feature set changed, force recompute.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00022|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00023|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00024|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5634] manager: (ovn-49146e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5639] manager: (ovn-a1a60c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5645] manager: (ovn-38ab03-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 10 05:55:28 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 05:55:28 np0005479822 kernel: genev_sys_6081: entered promiscuous mode
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5918] device (genev_sys_6081): carrier: link connected
Oct 10 05:55:28 np0005479822 NetworkManager[44982]: <info>  [1760090128.5921] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct 10 05:55:28 np0005479822 systemd-udevd[131922]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:55:28 np0005479822 systemd-udevd[131926]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 05:55:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:28.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:29 np0005479822 python3.9[132015]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:55:29 np0005479822 ovs-vsctl[132016]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 10 05:55:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:29 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:29 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:29 np0005479822 python3.9[132169]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:55:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:29 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:29 np0005479822 ovs-vsctl[132171]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 10 05:55:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:31 np0005479822 python3.9[132324]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:55:31 np0005479822 ovs-vsctl[132325]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 10 05:55:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:31 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:31 np0005479822 systemd[1]: session-50.scope: Deactivated successfully.
Oct 10 05:55:31 np0005479822 systemd[1]: session-50.scope: Consumed 1min 7.487s CPU time.
Oct 10 05:55:31 np0005479822 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Oct 10 05:55:31 np0005479822 systemd-logind[789]: Removed session 50.
Oct 10 05:55:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:31 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:31 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:34.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:35 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:35 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:35 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:36.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:36 np0005479822 systemd-logind[789]: New session 52 of user zuul.
Oct 10 05:55:36 np0005479822 systemd[1]: Started Session 52 of User zuul.
Oct 10 05:55:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:37 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:37 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:37 np0005479822 python3.9[132532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:55:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:37 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:38.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:38 np0005479822 systemd[1]: Stopping User Manager for UID 0...
Oct 10 05:55:38 np0005479822 systemd[131781]: Activating special unit Exit the Session...
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped target Main User Target.
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped target Basic System.
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped target Paths.
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped target Sockets.
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped target Timers.
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 05:55:38 np0005479822 systemd[131781]: Closed D-Bus User Message Bus Socket.
Oct 10 05:55:38 np0005479822 systemd[131781]: Stopped Create User's Volatile Files and Directories.
Oct 10 05:55:38 np0005479822 systemd[131781]: Removed slice User Application Slice.
Oct 10 05:55:38 np0005479822 systemd[131781]: Reached target Shutdown.
Oct 10 05:55:38 np0005479822 systemd[131781]: Finished Exit the Session.
Oct 10 05:55:38 np0005479822 systemd[131781]: Reached target Exit the Session.
Oct 10 05:55:38 np0005479822 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 05:55:38 np0005479822 systemd[1]: Stopped User Manager for UID 0.
Oct 10 05:55:38 np0005479822 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 05:55:38 np0005479822 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 05:55:38 np0005479822 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 05:55:38 np0005479822 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 05:55:38 np0005479822 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 05:55:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:39 np0005479822 python3.9[132690]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:39 np0005479822 python3.9[132843]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:40 np0005479822 python3.9[132995]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:41 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:41 np0005479822 python3.9[133147]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:41 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:41 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:42.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:42 np0005479822 python3.9[133300]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:43 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:43 np0005479822 python3.9[133450]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:55:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:43 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:43 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:44 np0005479822 python3.9[133603]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 05:55:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:46 np0005479822 python3.9[133754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:46.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:47 np0005479822 python3.9[133876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090145.417774-219-211815174147707/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:47 np0005479822 python3.9[134027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:48 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:48 np0005479822 python3.9[134148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090147.3173375-264-139570385810756/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:49 np0005479822 python3.9[134301]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:55:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:50 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:50 np0005479822 python3.9[134385]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:55:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:50.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:52 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:53 np0005479822 python3.9[134566]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:55:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0011f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:54 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:55:54 np0005479822 python3.9[134720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:54.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:54 np0005479822 python3.9[134841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090153.5146852-375-40591609089866/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:55:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:55:55 np0005479822 kernel: ganesha.nfsd[134388]: segfault at 50 ip 00007fb497e7332e sp 00007fb461ffa210 error 4 in libntirpc.so.5.8[7fb497e58000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 05:55:55 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 05:55:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8001320 fd 38 proxy ignored for local
Oct 10 05:55:55 np0005479822 systemd[1]: Started Process Core Dump (PID 134986/UID 0).
Oct 10 05:55:55 np0005479822 python3.9[134992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:56 np0005479822 python3.9[135115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090154.8828712-375-250302573173699/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:56 np0005479822 systemd-coredump[134993]: Process 123630 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 59:#012#0  0x00007fb497e7332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 05:55:56 np0005479822 systemd[1]: systemd-coredump@2-134986-0.service: Deactivated successfully.
Oct 10 05:55:56 np0005479822 systemd[1]: systemd-coredump@2-134986-0.service: Consumed 1.258s CPU time.
Oct 10 05:55:56 np0005479822 podman[135144]: 2025-10-10 09:55:56.652722644 +0000 UTC m=+0.047814201 container died 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 05:55:56 np0005479822 systemd[1]: var-lib-containers-storage-overlay-131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec-merged.mount: Deactivated successfully.
Oct 10 05:55:56 np0005479822 podman[135144]: 2025-10-10 09:55:56.698052926 +0000 UTC m=+0.093144453 container remove 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 05:55:56 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 05:55:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:56.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:56 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 05:55:56 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.918s CPU time.
Oct 10 05:55:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:55:57 np0005479822 python3.9[135314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:58 np0005479822 python3.9[135436]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090156.8858407-507-240203052217470/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:55:58 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:58Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Oct 10 05:55:58 np0005479822 ovn_controller[131749]: 2025-10-10T09:55:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 10 05:55:58 np0005479822 podman[135437]: 2025-10-10 09:55:58.233561551 +0000 UTC m=+0.148160209 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 05:55:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:55:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:55:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:55:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:55:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:58.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:55:58 np0005479822 python3.9[135614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:55:59 np0005479822 python3.9[135735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090158.2549388-507-264385398988774/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:00 np0005479822 python3.9[135886]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:56:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:00.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:01 np0005479822 python3.9[136040]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095601 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:56:01 np0005479822 python3.9[136193]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:02 np0005479822 python3.9[136271]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:03 np0005479822 python3.9[136423]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:03 np0005479822 python3.9[136502]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:04 np0005479822 python3.9[136654]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:05 np0005479822 python3.9[136806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:05 np0005479822 python3.9[136885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:06 np0005479822 python3.9[137037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:06 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 3.
Oct 10 05:56:06 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:56:06 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.918s CPU time.
Oct 10 05:56:07 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 05:56:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:07 np0005479822 python3.9[137128]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:07 np0005479822 podman[137161]: 2025-10-10 09:56:07.307424557 +0000 UTC m=+0.059680973 container create f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 10 05:56:07 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 05:56:07 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 05:56:07 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:56:07 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 05:56:07 np0005479822 podman[137161]: 2025-10-10 09:56:07.274892632 +0000 UTC m=+0.027149058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 05:56:07 np0005479822 podman[137161]: 2025-10-10 09:56:07.38367761 +0000 UTC m=+0.135934046 container init f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:56:07 np0005479822 podman[137161]: 2025-10-10 09:56:07.394445583 +0000 UTC m=+0.146701969 container start f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:56:07 np0005479822 bash[137161]: f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1
Oct 10 05:56:07 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 05:56:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:56:08 np0005479822 python3.9[137371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:08 np0005479822 systemd[1]: Reloading.
Oct 10 05:56:08 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:56:08 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:56:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:08.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:09 np0005479822 python3.9[137560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:10 np0005479822 python3.9[137639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:10 np0005479822 python3.9[137791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:11 np0005479822 python3.9[137869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:12 np0005479822 python3.9[138022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:12 np0005479822 systemd[1]: Reloading.
Oct 10 05:56:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:12 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:56:12 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:56:12 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 05:56:12 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 05:56:12 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 05:56:12 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 05:56:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:56:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:56:13 np0005479822 python3.9[138242]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:14.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:14 np0005479822 python3.9[138394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:14.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:15 np0005479822 python3.9[138517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090173.874504-960-74001828533149/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:16 np0005479822 python3.9[138670]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:56:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:16.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:17 np0005479822 python3.9[138822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:56:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:17 np0005479822 python3.9[138946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090176.4142663-1035-236382636447488/.source.json _original_basename=.ob3n8vt3 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:18 np0005479822 python3.9[139098]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 05:56:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4940000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:20.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:21 np0005479822 python3.9[139541]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 10 05:56:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:22 np0005479822 python3.9[139694]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 05:56:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:22.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:23 np0005479822 python3.9[139846]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 05:56:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095623 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:56:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:24.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:25 np0005479822 python3[140025]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 05:56:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:26.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:28.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:30.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:33 np0005479822 podman[140157]: 2025-10-10 09:56:33.689829862 +0000 UTC m=+4.780948769 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 10 05:56:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:56:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:56:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:56:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:56:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:34 np0005479822 podman[140038]: 2025-10-10 09:56:34.14774498 +0000 UTC m=+8.951063868 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 05:56:34 np0005479822 podman[140306]: 2025-10-10 09:56:34.379441144 +0000 UTC m=+0.067695416 container create c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 05:56:34 np0005479822 podman[140306]: 2025-10-10 09:56:34.345999536 +0000 UTC m=+0.034253898 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 05:56:34 np0005479822 python3[140025]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 05:56:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:34.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:34.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:35 np0005479822 python3.9[140496]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:56:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:36 np0005479822 python3.9[140651]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:36 np0005479822 python3.9[140727]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 05:56:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:37 np0005479822 python3.9[140879]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090196.8461943-1299-53411867460487/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:56:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:38 np0005479822 python3.9[140955]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 05:56:38 np0005479822 systemd[1]: Reloading.
Oct 10 05:56:38 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:56:38 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:56:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:39 np0005479822 python3.9[141091]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:39 np0005479822 systemd[1]: Reloading.
Oct 10 05:56:39 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:56:39 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:56:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:56:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:56:39 np0005479822 systemd[1]: Starting ovn_metadata_agent container...
Oct 10 05:56:39 np0005479822 systemd[1]: Started libcrun container.
Oct 10 05:56:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81fc82675d96858f7a747d2ddbf15ae6cb8daca49b083b0fa4c06685d283a50/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 10 05:56:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81fc82675d96858f7a747d2ddbf15ae6cb8daca49b083b0fa4c06685d283a50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 05:56:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:40 np0005479822 systemd[1]: Started /usr/bin/podman healthcheck run c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51.
Oct 10 05:56:40 np0005479822 podman[141135]: 2025-10-10 09:56:40.205647321 +0000 UTC m=+0.577931075 container init c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + sudo -E kolla_set_configs
Oct 10 05:56:40 np0005479822 podman[141135]: 2025-10-10 09:56:40.24786238 +0000 UTC m=+0.620146154 container start c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:56:40 np0005479822 edpm-start-podman-container[141135]: ovn_metadata_agent
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Validating config file
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Copying service configuration files
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Writing out command to execute
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: ++ cat /run_command
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + CMD=neutron-ovn-metadata-agent
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + ARGS=
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + sudo kolla_copy_cacerts
Oct 10 05:56:40 np0005479822 podman[141158]: 2025-10-10 09:56:40.345637055 +0000 UTC m=+0.079718371 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 05:56:40 np0005479822 edpm-start-podman-container[141134]: Creating additional drop-in dependency for "ovn_metadata_agent" (c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51)
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + [[ ! -n '' ]]
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + . kolla_extend_start
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: Running command: 'neutron-ovn-metadata-agent'
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + umask 0022
Oct 10 05:56:40 np0005479822 ovn_metadata_agent[141151]: + exec neutron-ovn-metadata-agent
Oct 10 05:56:40 np0005479822 systemd[1]: Reloading.
Oct 10 05:56:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:40.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:40 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:56:40 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:56:40 np0005479822 systemd[1]: Started ovn_metadata_agent container.
Oct 10 05:56:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:40.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:41 np0005479822 systemd[1]: session-52.scope: Deactivated successfully.
Oct 10 05:56:41 np0005479822 systemd[1]: session-52.scope: Consumed 1min 4.123s CPU time.
Oct 10 05:56:41 np0005479822 systemd-logind[789]: Session 52 logged out. Waiting for processes to exit.
Oct 10 05:56:41 np0005479822 systemd-logind[789]: Removed session 52.
Oct 10 05:56:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.189 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.189 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.201 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.202 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.202 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.202 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.203 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.217 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ee0899c1-415d-4aa8-abe8-1240b4e8bf2c (UUID: ee0899c1-415d-4aa8-abe8-1240b4e8bf2c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.242 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.249 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 05:56:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.254 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ee0899c1-415d-4aa8-abe8-1240b4e8bf2c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], external_ids={}, name=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, nb_cfg_timestamp=1760090136565, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.255 141156 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f9372ffcf40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 INFO oslo_service.service [-] Starting 1 workers
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.260 141156 DEBUG oslo_service.service [-] Started child 141270 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.264 141156 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmprcodsn0p/privsep.sock']
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.267 141270 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1938146'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.298 141270 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.299 141270 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.299 141270 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.302 141270 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.309 141270 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.316 141270 INFO eventlet.wsgi.server [-] (141270) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 10 05:56:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:42.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:42.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:42 np0005479822 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.938 141156 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.938 141156 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprcodsn0p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.812 141275 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.816 141275 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.818 141275 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.819 141275 INFO oslo.privsep.daemon [-] privsep daemon running as pid 141275
Oct 10 05:56:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.941 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[40d37dff-bf20-4809-b174-a9fccb83d19a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 05:56:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.453 141275 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.453 141275 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.453 141275 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 05:56:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.924 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[d61bbd96-24ca-4494-83a9-e7dd9b98b03f]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.927 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, column=external_ids, values=({'neutron:ovn-metadata-id': 'f0896111-8589-5c53-9955-6cd3547e7998'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.944 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 05:56:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 05:56:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:56:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:56:46 np0005479822 systemd-logind[789]: New session 53 of user zuul.
Oct 10 05:56:46 np0005479822 systemd[1]: Started Session 53 of User zuul.
Oct 10 05:56:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:46.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:47 np0005479822 python3.9[141436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 05:56:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:48.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:48.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:49 np0005479822 python3.9[141592]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:56:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095650 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:56:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:50 np0005479822 python3.9[141758]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 05:56:50 np0005479822 systemd[1]: Reloading.
Oct 10 05:56:50 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:56:50 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:56:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:50.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:51 np0005479822 python3.9[141946]: ansible-ansible.builtin.service_facts Invoked
Oct 10 05:56:51 np0005479822 network[141963]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 05:56:51 np0005479822 network[141964]: 'network-scripts' will be removed from distribution in near future.
Oct 10 05:56:51 np0005479822 network[141965]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 05:56:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:52.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:52.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:54.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:56 np0005479822 python3.9[142257]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:56:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:56.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:56:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:56:57 np0005479822 python3.9[142411]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:56:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:56:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:56:58 np0005479822 python3.9[142564]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:56:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:56:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:56:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:56:59 np0005479822 python3.9[142717]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:56:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:00 np0005479822 python3.9[142871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:57:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:00.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:01 np0005479822 python3.9[143024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:57:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:57:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:57:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:02 np0005479822 python3.9[143178]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 05:57:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:03 np0005479822 python3.9[143331]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:04 np0005479822 python3.9[143484]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:57:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:04 np0005479822 podman[143608]: 2025-10-10 09:57:04.840755151 +0000 UTC m=+0.114302489 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 10 05:57:04 np0005479822 python3.9[143653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:05 np0005479822 python3.9[143815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:06 np0005479822 python3.9[143967]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:06.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:07 np0005479822 python3.9[144119]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:08 np0005479822 python3.9[144272]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:08.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:09 np0005479822 python3.9[144424]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:09 np0005479822 python3.9[144577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095710 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:57:10 np0005479822 podman[144729]: 2025-10-10 09:57:10.474035878 +0000 UTC m=+0.077173890 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 05:57:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:10 np0005479822 python3.9[144730]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:10.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:11 np0005479822 python3.9[144900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:12 np0005479822 python3.9[145053]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:12.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:12 np0005479822 python3.9[145205]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:13 np0005479822 python3.9[145383]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:57:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:14.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:14 np0005479822 python3.9[145535]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:14.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:15 np0005479822 python3.9[145687]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 05:57:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:16 np0005479822 python3.9[145840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 05:57:16 np0005479822 systemd[1]: Reloading.
Oct 10 05:57:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:16.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:16 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:57:16 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:57:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:16.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:17 np0005479822 python3.9[146028]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095718 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 05:57:18 np0005479822 python3.9[146181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:18.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:18.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:19 np0005479822 python3.9[146334]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:19 np0005479822 python3.9[146488]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:20.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:20 np0005479822 python3.9[146641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:20.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:21 np0005479822 python3.9[146794]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:22 np0005479822 python3.9[146949]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 05:57:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:22.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:23 np0005479822 python3.9[147104]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 10 05:57:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:24.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:24 np0005479822 python3.9[147257]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 05:57:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:24.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:25 np0005479822 python3.9[147416]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 05:57:25 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:57:25 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 05:57:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:26.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:27 np0005479822 python3.9[147577]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 05:57:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 05:57:27 np0005479822 python3.9[147662]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 05:57:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49100016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:28.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49100016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:30.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 05:57:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 05:57:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:57:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:32.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:57:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49100016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 05:57:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:34.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:34.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:35 np0005479822 podman[147702]: 2025-10-10 09:57:35.018930342 +0000 UTC m=+0.123537394 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 05:57:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:36.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:38.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:57:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:57:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:57:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:57:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095740 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 05:57:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:40.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:40 np0005479822 podman[147958]: 2025-10-10 09:57:40.977953006 +0000 UTC m=+0.074693736 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 05:57:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:57:42.190 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 05:57:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:57:42.191 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 05:57:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:57:42.191 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 05:57:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:42.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:44.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:44.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:57:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:57:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:46.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:48.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:48.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:50.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:52.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:56 np0005479822 kernel: SELinux:  Converting 2768 SID table entries...
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:57:56 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:57:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:57:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:57:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:57:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:57:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:57:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:58.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:57:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:57:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:00.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:05 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 10 05:58:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:06 np0005479822 podman[148088]: 2025-10-10 09:58:06.060590489 +0000 UTC m=+0.143741953 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 05:58:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:06 np0005479822 kernel: SELinux:  Converting 2768 SID table entries...
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:58:06 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:58:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:06.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:08.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:10.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:11 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 10 05:58:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:12 np0005479822 podman[148120]: 2025-10-10 09:58:12.00499698 +0000 UTC m=+0.089940597 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:58:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:12.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:14.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:16.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:16.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:18.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:18.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:20.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:22.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:24.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:26.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:26.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:30.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:32.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:34.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:36.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:37 np0005479822 podman[156807]: 2025-10-10 09:58:37.040584815 +0000 UTC m=+0.131545050 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 05:58:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:38.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 05:58:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 05:58:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:58:42.191 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 05:58:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:58:42.192 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 05:58:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:58:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 05:58:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:42.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:42 np0005479822 podman[159433]: 2025-10-10 09:58:42.953419795 +0000 UTC m=+0.060629700 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:58:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:44.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:45 np0005479822 podman[160851]: 2025-10-10 09:58:45.494202528 +0000 UTC m=+0.059364199 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 05:58:45 np0005479822 podman[160851]: 2025-10-10 09:58:45.590601494 +0000 UTC m=+0.155763155 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 05:58:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 05:58:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:46 np0005479822 podman[161332]: 2025-10-10 09:58:46.067266233 +0000 UTC m=+0.055392411 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:58:46 np0005479822 podman[161332]: 2025-10-10 09:58:46.073195316 +0000 UTC m=+0.061321494 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 05:58:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:46 np0005479822 podman[161638]: 2025-10-10 09:58:46.456013001 +0000 UTC m=+0.090477604 container exec f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 05:58:46 np0005479822 podman[161638]: 2025-10-10 09:58:46.481759307 +0000 UTC m=+0.116223860 container exec_died f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 05:58:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:46.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:46 np0005479822 podman[161857]: 2025-10-10 09:58:46.795209418 +0000 UTC m=+0.071048651 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:58:46 np0005479822 podman[161857]: 2025-10-10 09:58:46.81167835 +0000 UTC m=+0.087517533 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 05:58:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:47 np0005479822 podman[162058]: 2025-10-10 09:58:47.075726065 +0000 UTC m=+0.065785466 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, name=keepalived)
Oct 10 05:58:47 np0005479822 podman[162058]: 2025-10-10 09:58:47.098776458 +0000 UTC m=+0.088835829 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, version=2.2.4, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Oct 10 05:58:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:48 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:58:48 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:58:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:48.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 05:58:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:58:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:58:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:50.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:51 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 05:58:51 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 05:58:51 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:58:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:58:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 05:58:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:54.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:56.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:58:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:58:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:58:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:58:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:58.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:58:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:58:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:58:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:00.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:02.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:59:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 05:59:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:05.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:06.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:07 np0005479822 kernel: SELinux:  Converting 2769 SID table entries...
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability open_perms=1
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability always_check_network=0
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 05:59:07 np0005479822 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 05:59:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:07 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 10 05:59:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:08 np0005479822 podman[165649]: 2025-10-10 09:59:08.088883449 +0000 UTC m=+0.176985278 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 05:59:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:08 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:59:08 np0005479822 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 05:59:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:08.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:09.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:11.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:12.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:13.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:13 np0005479822 podman[165906]: 2025-10-10 09:59:13.408722643 +0000 UTC m=+0.116267242 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 05:59:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:14.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:16 np0005479822 systemd[1]: Stopping OpenSSH server daemon...
Oct 10 05:59:16 np0005479822 systemd[1]: sshd.service: Deactivated successfully.
Oct 10 05:59:16 np0005479822 systemd[1]: Stopped OpenSSH server daemon.
Oct 10 05:59:16 np0005479822 systemd[1]: sshd.service: Consumed 4.543s CPU time, read 0B from disk, written 44.0K to disk.
Oct 10 05:59:16 np0005479822 systemd[1]: Stopped target sshd-keygen.target.
Oct 10 05:59:16 np0005479822 systemd[1]: Stopping sshd-keygen.target...
Oct 10 05:59:16 np0005479822 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 05:59:16 np0005479822 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 05:59:16 np0005479822 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 05:59:16 np0005479822 systemd[1]: Reached target sshd-keygen.target.
Oct 10 05:59:16 np0005479822 systemd[1]: Starting OpenSSH server daemon...
Oct 10 05:59:16 np0005479822 systemd[1]: Started OpenSSH server daemon.
Oct 10 05:59:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:16.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:19 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 05:59:19 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 05:59:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:19 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:19 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:19 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:19 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 05:59:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:21.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:21 np0005479822 systemd[1]: Starting PackageKit Daemon...
Oct 10 05:59:21 np0005479822 systemd[1]: Started PackageKit Daemon.
Oct 10 05:59:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:22.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:23.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:24 np0005479822 python3.9[171487]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:59:24 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:24 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:24 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:25.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:25 np0005479822 python3.9[172703]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:59:25 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:25 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:25 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:26 np0005479822 python3.9[173875]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:59:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:27 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:27 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:27 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:28 np0005479822 python3.9[175134]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:59:28 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:28 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:28 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:28 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 05:59:28 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 05:59:28 np0005479822 systemd[1]: man-db-cache-update.service: Consumed 12.275s CPU time.
Oct 10 05:59:28 np0005479822 systemd[1]: run-r0b44768244b348e2aa04d49e38e8b22d.service: Deactivated successfully.
Oct 10 05:59:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:29 np0005479822 python3.9[176026]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:29 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:29 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:29 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:30 np0005479822 python3.9[176217]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:30.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:31 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:31 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:31 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:32.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:32 np0005479822 python3.9[176407]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:33 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:33 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:33 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:34 np0005479822 python3.9[176623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:35 np0005479822 python3.9[176778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:35 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:35 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:35 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:36 np0005479822 python3.9[176969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 05:59:36 np0005479822 systemd[1]: Reloading.
Oct 10 05:59:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 05:59:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 05:59:37 np0005479822 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 10 05:59:37 np0005479822 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 10 05:59:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:38 np0005479822 podman[177134]: 2025-10-10 09:59:38.831727531 +0000 UTC m=+0.155335152 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 05:59:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:39.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:39 np0005479822 python3.9[177180]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:40 np0005479822 python3.9[177345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:40.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:41 np0005479822 python3.9[177500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:42 np0005479822 python3.9[177656]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:59:42.192 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 05:59:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:59:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 05:59:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 09:59:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 05:59:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:42 np0005479822 python3.9[177811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:43 np0005479822 podman[177939]: 2025-10-10 09:59:43.640689038 +0000 UTC m=+0.054489037 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 05:59:43 np0005479822 python3.9[177986]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:44 np0005479822 python3.9[178141]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:45 np0005479822 python3.9[178297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:46.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:46 np0005479822 python3.9[178452]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:47 np0005479822 python3.9[178610]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:48 np0005479822 python3.9[178765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:59:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:59:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:49 np0005479822 python3.9[178921]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:50 np0005479822 python3.9[179076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:59:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:59:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:51 np0005479822 python3.9[179232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 05:59:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:52 np0005479822 python3.9[179387]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:59:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:53 np0005479822 python3.9[179540]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:59:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:54 np0005479822 python3.9[179717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:59:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:54.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:55 np0005479822 python3.9[179869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:59:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:55 np0005479822 python3.9[180022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:59:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:56 np0005479822 python3.9[180174]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 05:59:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:56.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 05:59:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 05:59:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 05:59:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:57 np0005479822 python3.9[180326]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:59:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:58 np0005479822 python3.9[180452]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090396.8007727-1623-238081130552531/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 05:59:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 05:59:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 05:59:59 np0005479822 python3.9[180604]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 05:59:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 05:59:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 05:59:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 05:59:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 05:59:59 np0005479822 python3.9[180730]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090398.5038142-1623-154772745150641/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:00 np0005479822 ceph-mon[79167]: overall HEALTH_OK
Oct 10 06:00:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:00 np0005479822 python3.9[180882]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:01 np0005479822 python3.9[181007]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090399.957148-1623-23640111150468/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:01 np0005479822 python3.9[181160]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:02 np0005479822 python3.9[181285]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090401.3041759-1623-236170911593740/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:03 np0005479822 python3.9[181487]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:03 np0005479822 python3.9[181644]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090402.6463683-1623-227007920265934/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:04 np0005479822 python3.9[181796]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:04.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:05 np0005479822 python3.9[181921]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090404.0837252-1623-27713314352602/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:05 np0005479822 python3.9[182074]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:06 np0005479822 python3.9[182197]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090405.4512913-1623-216920132632203/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:06.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:07 np0005479822 python3.9[182349]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:00:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:00:08 np0005479822 python3.9[182475]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090406.7927184-1623-235624757740266/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:08.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:08 np0005479822 python3.9[182627]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 10 06:00:09 np0005479822 podman[182629]: 2025-10-10 10:00:09.01960784 +0000 UTC m=+0.119487396 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 06:00:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:09 np0005479822 python3.9[182805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:10 np0005479822 python3.9[182957]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:11 np0005479822 python3.9[183109]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:11 np0005479822 python3.9[183262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:12 np0005479822 python3.9[183414]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:13 np0005479822 python3.9[183591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:00:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:00:13 np0005479822 podman[183716]: 2025-10-10 10:00:13.849282629 +0000 UTC m=+0.085020252 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:00:14 np0005479822 python3.9[183780]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:14 np0005479822 python3.9[183939]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:15 np0005479822 python3.9[184091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:16 np0005479822 python3.9[184244]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:17.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:17 np0005479822 python3.9[184396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:17 np0005479822 python3.9[184550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:18 np0005479822 python3.9[184702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:18.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:19 np0005479822 python3.9[184855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100020 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:00:20 np0005479822 python3.9[185008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:21 np0005479822 python3.9[185131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090419.8174133-2286-270545624366074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:21 np0005479822 python3.9[185284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:22 np0005479822 python3.9[185407]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090421.228271-2286-79502027795312/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:22.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:23 np0005479822 python3.9[185559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:23 np0005479822 python3.9[185683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090422.678692-2286-265801747247513/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:24 np0005479822 python3.9[185835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:24.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:25 np0005479822 python3.9[185958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090424.0010319-2286-40562018178758/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080040f0 fd 48 proxy ignored for local
Oct 10 06:00:25 np0005479822 kernel: ganesha.nfsd[148040]: segfault at 50 ip 00007f49e966232e sp 00007f49b8ff8210 error 4 in libntirpc.so.5.8[7f49e9647000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 10 06:00:25 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:00:25 np0005479822 systemd[1]: Started Process Core Dump (PID 186036/UID 0).
Oct 10 06:00:25 np0005479822 python3.9[186113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:26 np0005479822 python3.9[186236]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090425.2710648-2286-93488743776918/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:26 np0005479822 systemd-coredump[186045]: Process 137201 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 59:#012#0  0x00007f49e966232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:00:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:26.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:26 np0005479822 systemd[1]: systemd-coredump@3-186036-0.service: Deactivated successfully.
Oct 10 06:00:26 np0005479822 systemd[1]: systemd-coredump@3-186036-0.service: Consumed 1.369s CPU time.
Oct 10 06:00:26 np0005479822 podman[186390]: 2025-10-10 10:00:26.918634661 +0000 UTC m=+0.047979698 container died f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 06:00:26 np0005479822 systemd[1]: var-lib-containers-storage-overlay-e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e-merged.mount: Deactivated successfully.
Oct 10 06:00:26 np0005479822 podman[186390]: 2025-10-10 10:00:26.961742808 +0000 UTC m=+0.091087845 container remove f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 06:00:26 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:00:27 np0005479822 python3.9[186399]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:27.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:27 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:00:27 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.348s CPU time.
Oct 10 06:00:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:27 np0005479822 python3.9[186561]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090426.579293-2286-156730524990194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:28 np0005479822 python3.9[186713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:28.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:29 np0005479822 python3.9[186836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090427.8587306-2286-233201256705247/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:29 np0005479822 python3.9[186989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:30 np0005479822 python3.9[187112]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090429.360766-2286-78106258280269/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:30.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:31 np0005479822 python3.9[187264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100031 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:00:31 np0005479822 python3.9[187388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090430.7404933-2286-51877306949992/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:32 np0005479822 python3.9[187540]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:32.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:33.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:33 np0005479822 python3.9[187663]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090432.1004825-2286-147900639177621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:34 np0005479822 python3.9[187816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:34 np0005479822 python3.9[187964]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090433.4790075-2286-182413493181529/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:34.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:35 np0005479822 python3.9[188116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:35 np0005479822 python3.9[188240]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090434.8267438-2286-103925834680979/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:36 np0005479822 python3.9[188392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:36.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:37 np0005479822 python3.9[188515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090435.994841-2286-177379231252413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:37 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 4.
Oct 10 06:00:37 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:00:37 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.348s CPU time.
Oct 10 06:00:37 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:00:37 np0005479822 podman[188712]: 2025-10-10 10:00:37.617488034 +0000 UTC m=+0.046755456 container create 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 06:00:37 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:00:37 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:00:37 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:00:37 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:00:37 np0005479822 podman[188712]: 2025-10-10 10:00:37.596171652 +0000 UTC m=+0.025439094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:00:37 np0005479822 podman[188712]: 2025-10-10 10:00:37.704784075 +0000 UTC m=+0.134051577 container init 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 06:00:37 np0005479822 podman[188712]: 2025-10-10 10:00:37.711115825 +0000 UTC m=+0.140383277 container start 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 06:00:37 np0005479822 bash[188712]: 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:00:37 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:00:37 np0005479822 python3.9[188720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:00:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:00:38 np0005479822 python3.9[188893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090437.2839243-2286-137998623201377/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:00:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:38.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:00:39 np0005479822 python3.9[189043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:00:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:39 np0005479822 podman[189171]: 2025-10-10 10:00:39.93641459 +0000 UTC m=+0.088842424 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 06:00:40 np0005479822 python3.9[189216]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 10 06:00:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:41 np0005479822 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.921210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441921260, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4656, "num_deletes": 502, "total_data_size": 12891337, "memory_usage": 13062144, "flush_reason": "Manual Compaction"}
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441959017, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8357485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13278, "largest_seqno": 17929, "table_properties": {"data_size": 8339729, "index_size": 12010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36450, "raw_average_key_size": 19, "raw_value_size": 8303208, "raw_average_value_size": 4480, "num_data_blocks": 525, "num_entries": 1853, "num_filter_entries": 1853, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089994, "oldest_key_time": 1760089994, "file_creation_time": 1760090441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 37848 microseconds, and 16367 cpu microseconds.
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.959064) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8357485 bytes OK
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.959085) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.960288) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.960304) EVENT_LOG_v1 {"time_micros": 1760090441960299, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.960339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12870831, prev total WAL file size 12870831, number of live WAL files 2.
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.963238) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8161KB)], [27(12MB)]
Oct 10 06:00:41 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441963289, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21059165, "oldest_snapshot_seqno": -1}
Oct 10 06:00:42 np0005479822 python3.9[189385]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5073 keys, 15514300 bytes, temperature: kUnknown
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442054500, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15514300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15475740, "index_size": 24754, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 126897, "raw_average_key_size": 25, "raw_value_size": 15379141, "raw_average_value_size": 3031, "num_data_blocks": 1042, "num_entries": 5073, "num_filter_entries": 5073, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.054685) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15514300 bytes
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.055975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.8 rd, 170.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.1 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6095, records dropped: 1022 output_compression: NoCompression
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.055991) EVENT_LOG_v1 {"time_micros": 1760090442055983, "job": 14, "event": "compaction_finished", "compaction_time_micros": 91260, "compaction_time_cpu_micros": 52399, "output_level": 6, "num_output_files": 1, "total_output_size": 15514300, "num_input_records": 6095, "num_output_records": 5073, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442057387, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442059647, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.963126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:00:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:00:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:00:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:00:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:00:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:00:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:00:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:42 np0005479822 python3.9[189537]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:42.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:43 np0005479822 python3.9[189690]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:00:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:00:43 np0005479822 podman[189779]: 2025-10-10 10:00:43.999407932 +0000 UTC m=+0.079662148 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:00:44 np0005479822 python3.9[189859]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:45 np0005479822 python3.9[190011]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:45 np0005479822 python3.9[190164]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100046 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:00:46 np0005479822 python3.9[190316]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:47 np0005479822 auditd[702]: Audit daemon rotating log files
Oct 10 06:00:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:47 np0005479822 python3.9[190468]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:47 np0005479822 python3.9[190621]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:48 np0005479822 python3.9[190773]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:49 np0005479822 python3.9[190926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:00:49 np0005479822 systemd[1]: Reloading.
Oct 10 06:00:49 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:00:49 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000010:nfs.cephfs.0: -2
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:00:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:00:50 np0005479822 systemd[1]: Starting libvirt logging daemon socket...
Oct 10 06:00:50 np0005479822 systemd[1]: Listening on libvirt logging daemon socket.
Oct 10 06:00:50 np0005479822 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 10 06:00:50 np0005479822 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 10 06:00:50 np0005479822 systemd[1]: Starting libvirt logging daemon...
Oct 10 06:00:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:50 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:50 np0005479822 systemd[1]: Started libvirt logging daemon.
Oct 10 06:00:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:50 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0ec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:51 np0005479822 python3.9[191136]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:00:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:51 np0005479822 systemd[1]: Reloading.
Oct 10 06:00:51 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:00:51 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:00:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:51 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0c8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:51 np0005479822 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 10 06:00:51 np0005479822 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 10 06:00:51 np0005479822 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 10 06:00:51 np0005479822 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 10 06:00:51 np0005479822 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 10 06:00:51 np0005479822 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 10 06:00:51 np0005479822 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 06:00:51 np0005479822 systemd[1]: Started libvirt nodedev daemon.
Oct 10 06:00:52 np0005479822 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 10 06:00:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:52 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:52 np0005479822 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 10 06:00:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:52 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:52 np0005479822 python3.9[191353]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:00:52 np0005479822 systemd[1]: Reloading.
Oct 10 06:00:52 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:00:52 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:00:52 np0005479822 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 10 06:00:52 np0005479822 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 10 06:00:52 np0005479822 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 10 06:00:52 np0005479822 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 10 06:00:52 np0005479822 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 10 06:00:52 np0005479822 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 10 06:00:52 np0005479822 systemd[1]: Starting libvirt proxy daemon...
Oct 10 06:00:52 np0005479822 systemd[1]: Started libvirt proxy daemon.
Oct 10 06:00:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100053 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:00:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:53 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:53 np0005479822 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e878a9cd-65b7-41e1-ab4d-cb9e8b771563
Oct 10 06:00:53 np0005479822 python3.9[191572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:00:53 np0005479822 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 10 06:00:53 np0005479822 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e878a9cd-65b7-41e1-ab4d-cb9e8b771563
Oct 10 06:00:53 np0005479822 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 10 06:00:53 np0005479822 systemd[1]: Reloading.
Oct 10 06:00:53 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:00:53 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:00:54 np0005479822 systemd[1]: Listening on libvirt locking daemon socket.
Oct 10 06:00:54 np0005479822 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 10 06:00:54 np0005479822 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 10 06:00:54 np0005479822 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 10 06:00:54 np0005479822 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 10 06:00:54 np0005479822 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 10 06:00:54 np0005479822 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 10 06:00:54 np0005479822 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 10 06:00:54 np0005479822 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 10 06:00:54 np0005479822 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 10 06:00:54 np0005479822 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 06:00:54 np0005479822 systemd[1]: Started libvirt QEMU daemon.
Oct 10 06:00:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:54 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:54 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:54.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:54 np0005479822 python3.9[191811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:00:54 np0005479822 systemd[1]: Reloading.
Oct 10 06:00:55 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:00:55 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:00:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:55.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:55 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:55 np0005479822 systemd[1]: Starting libvirt secret daemon socket...
Oct 10 06:00:55 np0005479822 systemd[1]: Listening on libvirt secret daemon socket.
Oct 10 06:00:55 np0005479822 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 10 06:00:55 np0005479822 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 10 06:00:55 np0005479822 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 10 06:00:55 np0005479822 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 10 06:00:55 np0005479822 systemd[1]: Starting libvirt secret daemon...
Oct 10 06:00:55 np0005479822 systemd[1]: Started libvirt secret daemon.
Oct 10 06:00:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:56 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:56 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:00:56 np0005479822 python3.9[192023]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:00:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:00:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:00:57 np0005479822 python3.9[192175]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 06:00:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:57.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:00:57 np0005479822 kernel: ganesha.nfsd[190969]: segfault at 50 ip 00007fa19e31632e sp 00007fa1637fd210 error 4 in libntirpc.so.5.8[7fa19e2fb000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 06:00:57 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:00:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:57 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy ignored for local
Oct 10 06:00:57 np0005479822 systemd[1]: Started Process Core Dump (PID 192201/UID 0).
Oct 10 06:00:58 np0005479822 python3.9[192330]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:00:58 np0005479822 systemd-coredump[192202]: Process 188733 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007fa19e31632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:00:58 np0005479822 systemd[1]: systemd-coredump@4-192201-0.service: Deactivated successfully.
Oct 10 06:00:58 np0005479822 podman[192363]: 2025-10-10 10:00:58.454745905 +0000 UTC m=+0.046141210 container died 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 06:00:58 np0005479822 systemd[1]: var-lib-containers-storage-overlay-917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a-merged.mount: Deactivated successfully.
Oct 10 06:00:58 np0005479822 podman[192363]: 2025-10-10 10:00:58.518052992 +0000 UTC m=+0.109448297 container remove 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 06:00:58 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:00:58 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:00:58 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.412s CPU time.
Oct 10 06:00:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:58.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:00:59 np0005479822 python3.9[192532]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 06:00:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:00:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:00:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:59.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:00 np0005479822 python3.9[192683]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:00.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:01 np0005479822 python3.9[192804]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090459.4863524-3360-43501129050237/.source.xml follow=False _original_basename=secret.xml.j2 checksum=baa25a2f67c100fe0cd0e069ccc25ef935446dd6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:01 np0005479822 python3.9[192968]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 21f084a3-af34-5230-afe4-ea5cd24a55f4#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:01:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:02 np0005479822 python3.9[193134]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:02.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100103 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:01:03 np0005479822 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 10 06:01:03 np0005479822 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 10 06:01:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:04.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:06 np0005479822 python3.9[193599]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:07.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:07 np0005479822 python3.9[193751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:07 np0005479822 python3.9[193875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090466.7360697-3525-192718964490057/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:08 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 5.
Oct 10 06:01:08 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:01:08 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.412s CPU time.
Oct 10 06:01:08 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:01:08 np0005479822 python3.9[194027]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:08.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100109 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:01:09 np0005479822 podman[194113]: 2025-10-10 10:01:09.177978841 +0000 UTC m=+0.049732245 container create e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 06:01:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:09.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:09 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:09 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:09 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:09 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:09 np0005479822 podman[194113]: 2025-10-10 10:01:09.1574801 +0000 UTC m=+0.029233524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:01:09 np0005479822 podman[194113]: 2025-10-10 10:01:09.257619227 +0000 UTC m=+0.129372641 container init e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 06:01:09 np0005479822 podman[194113]: 2025-10-10 10:01:09.27487582 +0000 UTC m=+0.146629224 container start e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:01:09 np0005479822 bash[194113]: e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8
Oct 10 06:01:09 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:01:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:01:09 np0005479822 python3.9[194281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:10 np0005479822 podman[194331]: 2025-10-10 10:01:10.242880237 +0000 UTC m=+0.100999040 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 10 06:01:10 np0005479822 python3.9[194379]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:10.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:11 np0005479822 python3.9[194534]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:11.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:11 np0005479822 python3.9[194613]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ukm_xrch recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:12 np0005479822 python3.9[194765]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:12.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:13 np0005479822 python3.9[194856]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:01:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:01:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:01:13 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:01:13 np0005479822 python3.9[195077]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:01:14 np0005479822 podman[195127]: 2025-10-10 10:01:14.272825562 +0000 UTC m=+0.061878090 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:01:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:14 np0005479822 python3[195275]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 06:01:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:15.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:01:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:01:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:01:15 np0005479822 python3.9[195428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:01:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:01:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:01:16 np0005479822 python3.9[195506]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:17.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:17 np0005479822 python3.9[195658]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:17 np0005479822 python3.9[195737]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:18 np0005479822 python3.9[195889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:19.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:19 np0005479822 python3.9[195992]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:01:19 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:01:20 np0005479822 python3.9[196145]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:20 np0005479822 python3.9[196223]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:20.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:21 np0005479822 python3.9[196375]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:01:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:01:22 np0005479822 python3.9[196501]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090480.7944658-3900-190061166664170/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:22 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff408000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:22 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3fc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:22.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:22 np0005479822 python3.9[196669]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:23 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:23 np0005479822 python3.9[196822]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.789268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483789367, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 252, "total_data_size": 1233775, "memory_usage": 1252280, "flush_reason": "Manual Compaction"}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483796516, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 571764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17934, "largest_seqno": 18585, "table_properties": {"data_size": 568864, "index_size": 872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7595, "raw_average_key_size": 19, "raw_value_size": 562820, "raw_average_value_size": 1481, "num_data_blocks": 38, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090442, "oldest_key_time": 1760090442, "file_creation_time": 1760090483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7307 microseconds, and 4457 cpu microseconds.
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.796580) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 571764 bytes OK
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.796604) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.797822) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.797845) EVENT_LOG_v1 {"time_micros": 1760090483797838, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.797866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1230155, prev total WAL file size 1230155, number of live WAL files 2.
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.798701) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(558KB)], [30(14MB)]
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483798763, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16086064, "oldest_snapshot_seqno": -1}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4951 keys, 12217286 bytes, temperature: kUnknown
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483877479, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12217286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12183668, "index_size": 20132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 124747, "raw_average_key_size": 25, "raw_value_size": 12093240, "raw_average_value_size": 2442, "num_data_blocks": 840, "num_entries": 4951, "num_filter_entries": 4951, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.877804) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12217286 bytes
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.879227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.0 rd, 155.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(49.5) write-amplify(21.4) OK, records in: 5453, records dropped: 502 output_compression: NoCompression
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.879257) EVENT_LOG_v1 {"time_micros": 1760090483879244, "job": 16, "event": "compaction_finished", "compaction_time_micros": 78841, "compaction_time_cpu_micros": 50327, "output_level": 6, "num_output_files": 1, "total_output_size": 12217286, "num_input_records": 5453, "num_output_records": 4951, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483879908, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483885101, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.798605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:01:23 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:01:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3f8001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:24 np0005479822 python3.9[196977]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:24.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:01:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:01:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100125 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:01:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:25 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3fc002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:25 np0005479822 python3.9[197130]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:01:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:26 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:26 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:26 np0005479822 python3.9[197283]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:01:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:26.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:27 np0005479822 kernel: ganesha.nfsd[196507]: segfault at 50 ip 00007ff4b5ba132e sp 00007ff47affc210 error 4 in libntirpc.so.5.8[7ff4b5b86000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 06:01:27 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:01:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:27 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3f8001d50 fd 38 proxy ignored for local
Oct 10 06:01:27 np0005479822 python3.9[197437]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:01:27 np0005479822 systemd[1]: Started Process Core Dump (PID 197439/UID 0).
Oct 10 06:01:28 np0005479822 python3.9[197595]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:28 np0005479822 systemd-coredump[197442]: Process 194138 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007ff4b5ba132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:01:28 np0005479822 systemd[1]: systemd-coredump@5-197439-0.service: Deactivated successfully.
Oct 10 06:01:28 np0005479822 systemd[1]: systemd-coredump@5-197439-0.service: Consumed 1.176s CPU time.
Oct 10 06:01:28 np0005479822 podman[197699]: 2025-10-10 10:01:28.79772218 +0000 UTC m=+0.047536326 container died e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 06:01:28 np0005479822 systemd[1]: var-lib-containers-storage-overlay-035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae-merged.mount: Deactivated successfully.
Oct 10 06:01:28 np0005479822 podman[197699]: 2025-10-10 10:01:28.850098236 +0000 UTC m=+0.099912322 container remove e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:01:28 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:01:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:28.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:29 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:01:29 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.738s CPU time.
Oct 10 06:01:29 np0005479822 python3.9[197777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:29 np0005479822 python3.9[197918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090488.548715-4116-232742658046124/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:30 np0005479822 python3.9[198070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100131 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:01:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:31.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:31 np0005479822 python3.9[198193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090490.0707424-4161-70759474067544/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:32 np0005479822 python3.9[198346]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:01:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:32 np0005479822 python3.9[198469]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090491.5710294-4206-122432080071250/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:01:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:32.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:33.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100133 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:01:33 np0005479822 python3.9[198622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:01:33 np0005479822 systemd[1]: Reloading.
Oct 10 06:01:33 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:01:33 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:01:34 np0005479822 systemd[1]: Reached target edpm_libvirt.target.
Oct 10 06:01:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:35 np0005479822 python3.9[198837]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 06:01:35 np0005479822 systemd[1]: Reloading.
Oct 10 06:01:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:35 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:01:35 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:01:35 np0005479822 systemd[1]: Reloading.
Oct 10 06:01:35 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:01:35 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:01:36 np0005479822 systemd[1]: session-53.scope: Deactivated successfully.
Oct 10 06:01:36 np0005479822 systemd[1]: session-53.scope: Consumed 4min 3.026s CPU time.
Oct 10 06:01:36 np0005479822 systemd-logind[789]: Session 53 logged out. Waiting for processes to exit.
Oct 10 06:01:36 np0005479822 systemd-logind[789]: Removed session 53.
Oct 10 06:01:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:39 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 6.
Oct 10 06:01:39 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:01:39 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.738s CPU time.
Oct 10 06:01:39 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:01:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:39 np0005479822 podman[198983]: 2025-10-10 10:01:39.49609815 +0000 UTC m=+0.064159992 container create 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 06:01:39 np0005479822 podman[198983]: 2025-10-10 10:01:39.463543887 +0000 UTC m=+0.031605799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:01:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:01:39 np0005479822 podman[198983]: 2025-10-10 10:01:39.583980968 +0000 UTC m=+0.152042830 container init 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 06:01:39 np0005479822 podman[198983]: 2025-10-10 10:01:39.594047167 +0000 UTC m=+0.162108989 container start 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 06:01:39 np0005479822 bash[198983]: 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:01:39 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:01:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:01:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:40.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:41 np0005479822 podman[199041]: 2025-10-10 10:01:41.047768084 +0000 UTC m=+0.144227730 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 06:01:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:41 np0005479822 systemd-logind[789]: New session 54 of user zuul.
Oct 10 06:01:41 np0005479822 systemd[1]: Started Session 54 of User zuul.
Oct 10 06:01:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:01:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:01:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:01:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:01:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:01:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:01:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:42 np0005479822 python3.9[199222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 06:01:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:42.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:44 np0005479822 python3.9[199379]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:01:44 np0005479822 podman[199503]: 2025-10-10 10:01:44.844354949 +0000 UTC m=+0.085673449 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:01:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:44.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:45 np0005479822 python3.9[199547]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:01:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:45 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:01:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:45 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:01:45 np0005479822 python3.9[199701]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:01:46 np0005479822 python3.9[199853]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 06:01:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:46.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:47 np0005479822 python3.9[200005]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:01:48 np0005479822 python3.9[200158]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:01:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:49.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:49 np0005479822 python3.9[200312]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:01:49 np0005479822 systemd[1]: Reloading.
Oct 10 06:01:49 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:01:49 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:01:50 np0005479822 python3.9[200503]: ansible-ansible.builtin.service_facts Invoked
Oct 10 06:01:50 np0005479822 network[200520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 06:01:50 np0005479822 network[200521]: 'network-scripts' will be removed from distribution in near future.
Oct 10 06:01:50 np0005479822 network[200522]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 06:01:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:01:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:01:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:52 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8068000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:52 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:52.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:53.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:53 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:54 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:54 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:01:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:01:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100155 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:01:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:55 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:56 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f80440016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:56 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:56.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:57 np0005479822 python3.9[200839]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:01:57 np0005479822 systemd[1]: Reloading.
Oct 10 06:01:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:57.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:01:57 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:01:57 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:01:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:57 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:58 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:58 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f80440016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:58 np0005479822 python3.9[201027]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:01:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:01:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:58.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:01:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:01:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:01:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:01:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:59 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:01:59 np0005479822 python3.9[201179]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 06:02:00 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:02:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:00 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:00 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:00.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:01 np0005479822 podman[201194]: 2025-10-10 10:02:01.044461884 +0000 UTC m=+1.452868035 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 06:02:01 np0005479822 podman[201254]: 2025-10-10 10:02:01.261480082 +0000 UTC m=+0.065781008 container create 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:02:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.2956] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 10 06:02:01 np0005479822 podman[201254]: 2025-10-10 10:02:01.229410751 +0000 UTC m=+0.033711737 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 06:02:01 np0005479822 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 06:02:01 np0005479822 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 06:02:01 np0005479822 kernel: veth0: entered allmulticast mode
Oct 10 06:02:01 np0005479822 kernel: veth0: entered promiscuous mode
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3278] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 10 06:02:01 np0005479822 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 06:02:01 np0005479822 kernel: podman0: port 1(veth0) entered forwarding state
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3312] device (veth0): carrier: link connected
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3317] device (podman0): carrier: link connected
Oct 10 06:02:01 np0005479822 systemd-udevd[201285]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:02:01 np0005479822 systemd-udevd[201282]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3773] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3782] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3794] device (podman0): Activation: starting connection 'podman0' (81076ce4-9a9e-4b26-b84c-99af0a2be891)
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3795] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3806] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3808] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.3811] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 06:02:01 np0005479822 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 06:02:01 np0005479822 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 06:02:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:01 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.4160] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.4162] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.4172] device (podman0): Activation: successful, device activated.
Oct 10 06:02:01 np0005479822 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 10 06:02:01 np0005479822 systemd[1]: Started libpod-conmon-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175.scope.
Oct 10 06:02:01 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:02:01 np0005479822 podman[201254]: 2025-10-10 10:02:01.677234885 +0000 UTC m=+0.481535811 container init 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:02:01 np0005479822 podman[201254]: 2025-10-10 10:02:01.685852301 +0000 UTC m=+0.490153227 container start 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:02:01 np0005479822 podman[201254]: 2025-10-10 10:02:01.689601884 +0000 UTC m=+0.493902810 container attach 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 10 06:02:01 np0005479822 iscsid_config[201413]: iqn.1994-05.com.redhat:fcb4321b495f#015
Oct 10 06:02:01 np0005479822 systemd[1]: libpod-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175.scope: Deactivated successfully.
Oct 10 06:02:01 np0005479822 podman[201254]: 2025-10-10 10:02:01.694978861 +0000 UTC m=+0.499279777 container died 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 06:02:01 np0005479822 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 06:02:01 np0005479822 kernel: veth0 (unregistering): left allmulticast mode
Oct 10 06:02:01 np0005479822 kernel: veth0 (unregistering): left promiscuous mode
Oct 10 06:02:01 np0005479822 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 06:02:01 np0005479822 NetworkManager[44982]: <info>  [1760090521.7746] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:02:02 np0005479822 systemd[1]: run-netns-netns\x2d342413df\x2de8a7\x2d9394\x2d7d06\x2d1c893eba8cfb.mount: Deactivated successfully.
Oct 10 06:02:02 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175-userdata-shm.mount: Deactivated successfully.
Oct 10 06:02:02 np0005479822 systemd[1]: var-lib-containers-storage-overlay-589e2b0bc08fc8cef88f6788646d4297ff98dfa271e5e29c49a9e276f23372ba-merged.mount: Deactivated successfully.
Oct 10 06:02:02 np0005479822 podman[201254]: 2025-10-10 10:02:02.141748967 +0000 UTC m=+0.946049903 container remove 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 06:02:02 np0005479822 systemd[1]: libpod-conmon-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175.scope: Deactivated successfully.
Oct 10 06:02:02 np0005479822 python3.9[201179]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 10 06:02:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:02 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:02 np0005479822 python3.9[201179]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 10 06:02:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:02 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:02.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:03 np0005479822 python3.9[201651]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:03 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:04 np0005479822 python3.9[201775]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090522.5788796-318-232663492197741/.source.iscsi _original_basename=.1e4w866_ follow=False checksum=538f0a6547d2d2c444bf7bb1ebe39f3e6e4f45dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:04 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:04 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:04 np0005479822 python3.9[201927]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:02:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:04.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:02:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:05 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:05 np0005479822 python3.9[202078]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:02:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:06 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:06 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:06 np0005479822 python3.9[202232]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:06.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:07 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:07 np0005479822 python3.9[202385]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:08 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:08 np0005479822 python3.9[202537]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:08 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:08 np0005479822 python3.9[202615]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:09.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:09 np0005479822 kernel: ganesha.nfsd[200532]: segfault at 50 ip 00007f811816032e sp 00007f80e67fb210 error 4 in libntirpc.so.5.8[7f8118145000+2c000] likely on CPU 7 (core 0, socket 7)
Oct 10 06:02:09 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:02:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:09 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy ignored for local
Oct 10 06:02:09 np0005479822 systemd[1]: Started Process Core Dump (PID 202740/UID 0).
Oct 10 06:02:09 np0005479822 python3.9[202770]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:10 np0005479822 python3.9[202848]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:10 np0005479822 systemd-coredump[202743]: Process 199003 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f811816032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:02:10 np0005479822 systemd[1]: systemd-coredump@6-202740-0.service: Deactivated successfully.
Oct 10 06:02:10 np0005479822 systemd[1]: systemd-coredump@6-202740-0.service: Consumed 1.166s CPU time.
Oct 10 06:02:10 np0005479822 podman[202964]: 2025-10-10 10:02:10.753503857 +0000 UTC m=+0.045442118 container died 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 06:02:10 np0005479822 systemd[1]: var-lib-containers-storage-overlay-a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b-merged.mount: Deactivated successfully.
Oct 10 06:02:10 np0005479822 podman[202964]: 2025-10-10 10:02:10.822006167 +0000 UTC m=+0.113944428 container remove 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 06:02:10 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:02:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:11.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:11 np0005479822 python3.9[203019]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:11 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:02:11 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.840s CPU time.
Oct 10 06:02:11 np0005479822 podman[203050]: 2025-10-10 10:02:11.201143256 +0000 UTC m=+0.110479015 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 10 06:02:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:11.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:11 np0005479822 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 06:02:11 np0005479822 python3.9[203227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:12 np0005479822 python3.9[203305]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:13.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:13.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:13 np0005479822 python3.9[203457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:13 np0005479822 python3.9[203536]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:14 np0005479822 python3.9[203713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:02:14 np0005479822 systemd[1]: Reloading.
Oct 10 06:02:14 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:02:14 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:02:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:15 np0005479822 podman[203750]: 2025-10-10 10:02:15.274643081 +0000 UTC m=+0.074392683 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:02:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100215 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:02:16 np0005479822 python3.9[203922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:16 np0005479822 python3.9[204000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:17.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:17.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:17 np0005479822 python3.9[204152]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:17 np0005479822 python3.9[204231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:18 np0005479822 python3.9[204383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:02:18 np0005479822 systemd[1]: Reloading.
Oct 10 06:02:18 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:02:18 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:02:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:19 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 06:02:19 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 06:02:19 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 06:02:19 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 06:02:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:19.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:20 np0005479822 python3.9[204658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:02:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:02:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:02:20 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:02:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:21 np0005479822 python3.9[204810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 7.
Oct 10 06:02:21 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:02:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.840s CPU time.
Oct 10 06:02:21 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:02:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:21 np0005479822 podman[204932]: 2025-10-10 10:02:21.486094388 +0000 UTC m=+0.071786251 container create 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 06:02:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:21 np0005479822 podman[204932]: 2025-10-10 10:02:21.455008955 +0000 UTC m=+0.040700828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:02:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:21 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:21 np0005479822 podman[204932]: 2025-10-10 10:02:21.568290805 +0000 UTC m=+0.153982668 container init 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 06:02:21 np0005479822 podman[204932]: 2025-10-10 10:02:21.580401797 +0000 UTC m=+0.166093630 container start 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 06:02:21 np0005479822 bash[204932]: 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:02:21 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:02:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:02:21 np0005479822 python3.9[205001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090540.5120976-781-139326340207066/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:23 np0005479822 python3.9[205192]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:23 np0005479822 python3.9[205345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:24 np0005479822 python3.9[205468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090543.3185034-855-167800353977167/.source.json _original_basename=.9yz8mjl1 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:25.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:25 np0005479822 python3.9[205620]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:02:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:02:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:27.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:27.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:27 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:02:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:27 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:02:28 np0005479822 python3.9[206074]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 10 06:02:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:29.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:29 np0005479822 python3.9[206226]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 06:02:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:29.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:30 np0005479822 python3.9[206379]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 06:02:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:31.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:32 np0005479822 python3[206558]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 06:02:32 np0005479822 podman[206594]: 2025-10-10 10:02:32.654488614 +0000 UTC m=+0.057121219 container create 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:02:32 np0005479822 podman[206594]: 2025-10-10 10:02:32.622890607 +0000 UTC m=+0.025523282 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 06:02:32 np0005479822 python3[206558]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 06:02:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:33.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:33.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:02:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:02:33 np0005479822 python3.9[206785]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:02:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:34 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:34 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:34 np0005479822 python3.9[206968]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:35 np0005479822 python3.9[207056]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:02:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:35 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:35 np0005479822 python3.9[207208]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090555.310925-1119-214567832608445/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:36 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:36 np0005479822 python3.9[207284]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 06:02:36 np0005479822 systemd[1]: Reloading.
Oct 10 06:02:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:36 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:36 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:02:36 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:02:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:37.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100237 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:02:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:37 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:37 np0005479822 python3.9[207396]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:02:37 np0005479822 systemd[1]: Reloading.
Oct 10 06:02:37 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:02:37 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:02:37 np0005479822 systemd[1]: Starting iscsid container...
Oct 10 06:02:38 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:02:38 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326947f4e00411a76076b87d809a6c8091bd2d003bb135b6d97bcf0ceddb2ea2/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:38 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326947f4e00411a76076b87d809a6c8091bd2d003bb135b6d97bcf0ceddb2ea2/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:38 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326947f4e00411a76076b87d809a6c8091bd2d003bb135b6d97bcf0ceddb2ea2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 06:02:38 np0005479822 systemd[1]: Started /usr/bin/podman healthcheck run 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143.
Oct 10 06:02:38 np0005479822 podman[207436]: 2025-10-10 10:02:38.115566481 +0000 UTC m=+0.170159441 container init 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 10 06:02:38 np0005479822 iscsid[207451]: + sudo -E kolla_set_configs
Oct 10 06:02:38 np0005479822 podman[207436]: 2025-10-10 10:02:38.1446577 +0000 UTC m=+0.199250650 container start 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 06:02:38 np0005479822 podman[207436]: iscsid
Oct 10 06:02:38 np0005479822 systemd[1]: Started iscsid container.
Oct 10 06:02:38 np0005479822 systemd[1]: Created slice User Slice of UID 0.
Oct 10 06:02:38 np0005479822 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 06:02:38 np0005479822 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 06:02:38 np0005479822 systemd[1]: Starting User Manager for UID 0...
Oct 10 06:02:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:38 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:38 np0005479822 podman[207457]: 2025-10-10 10:02:38.266829434 +0000 UTC m=+0.100290844 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 06:02:38 np0005479822 systemd[1]: 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143-2e6a7298cd651085.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 06:02:38 np0005479822 systemd[1]: 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143-2e6a7298cd651085.service: Failed with result 'exit-code'.
Oct 10 06:02:38 np0005479822 systemd[207477]: Queued start job for default target Main User Target.
Oct 10 06:02:38 np0005479822 systemd[207477]: Created slice User Application Slice.
Oct 10 06:02:38 np0005479822 systemd[207477]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 06:02:38 np0005479822 systemd[207477]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 06:02:38 np0005479822 systemd[207477]: Reached target Paths.
Oct 10 06:02:38 np0005479822 systemd[207477]: Reached target Timers.
Oct 10 06:02:38 np0005479822 systemd[207477]: Starting D-Bus User Message Bus Socket...
Oct 10 06:02:38 np0005479822 systemd[207477]: Starting Create User's Volatile Files and Directories...
Oct 10 06:02:38 np0005479822 systemd[207477]: Finished Create User's Volatile Files and Directories.
Oct 10 06:02:38 np0005479822 systemd[207477]: Listening on D-Bus User Message Bus Socket.
Oct 10 06:02:38 np0005479822 systemd[207477]: Reached target Sockets.
Oct 10 06:02:38 np0005479822 systemd[207477]: Reached target Basic System.
Oct 10 06:02:38 np0005479822 systemd[207477]: Reached target Main User Target.
Oct 10 06:02:38 np0005479822 systemd[207477]: Startup finished in 131ms.
Oct 10 06:02:38 np0005479822 systemd[1]: Started User Manager for UID 0.
Oct 10 06:02:38 np0005479822 systemd[1]: Started Session c3 of User root.
Oct 10 06:02:38 np0005479822 iscsid[207451]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 06:02:38 np0005479822 iscsid[207451]: INFO:__main__:Validating config file
Oct 10 06:02:38 np0005479822 iscsid[207451]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 06:02:38 np0005479822 iscsid[207451]: INFO:__main__:Writing out command to execute
Oct 10 06:02:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:38 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:38 np0005479822 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 10 06:02:38 np0005479822 iscsid[207451]: ++ cat /run_command
Oct 10 06:02:38 np0005479822 iscsid[207451]: + CMD='/usr/sbin/iscsid -f'
Oct 10 06:02:38 np0005479822 iscsid[207451]: + ARGS=
Oct 10 06:02:38 np0005479822 iscsid[207451]: + sudo kolla_copy_cacerts
Oct 10 06:02:38 np0005479822 systemd[1]: Started Session c4 of User root.
Oct 10 06:02:38 np0005479822 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 10 06:02:38 np0005479822 iscsid[207451]: Running command: '/usr/sbin/iscsid -f'
Oct 10 06:02:38 np0005479822 iscsid[207451]: + [[ ! -n '' ]]
Oct 10 06:02:38 np0005479822 iscsid[207451]: + . kolla_extend_start
Oct 10 06:02:38 np0005479822 iscsid[207451]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 10 06:02:38 np0005479822 iscsid[207451]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 10 06:02:38 np0005479822 iscsid[207451]: + umask 0022
Oct 10 06:02:38 np0005479822 iscsid[207451]: + exec /usr/sbin/iscsid -f
Oct 10 06:02:38 np0005479822 kernel: Loading iSCSI transport class v2.0-870.
Oct 10 06:02:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:39.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:39 np0005479822 python3.9[207655]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:02:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:39.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:39 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:39 np0005479822 python3.9[207808]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:40 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:40 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:40 np0005479822 python3.9[207960]: ansible-ansible.builtin.service_facts Invoked
Oct 10 06:02:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:41.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:41 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:42 np0005479822 podman[207964]: 2025-10-10 10:02:42.030814883 +0000 UTC m=+0.119973754 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 10 06:02:42 np0005479822 network[208005]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 06:02:42 np0005479822 network[208006]: 'network-scripts' will be removed from distribution in near future.
Oct 10 06:02:42 np0005479822 network[208007]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 06:02:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:02:42.196 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:02:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:02:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:02:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:02:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:02:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:42 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:42 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:43.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:43 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:44 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:44 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:45 np0005479822 podman[208100]: 2025-10-10 10:02:45.394402261 +0000 UTC m=+0.067858355 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 10 06:02:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:45 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:46 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:46 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:46 np0005479822 python3.9[208304]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 06:02:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:47.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:47.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:47 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:47 np0005479822 python3.9[208457]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 10 06:02:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:48 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:48 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:48 np0005479822 systemd[1]: Stopping User Manager for UID 0...
Oct 10 06:02:48 np0005479822 systemd[207477]: Activating special unit Exit the Session...
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped target Main User Target.
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped target Basic System.
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped target Paths.
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped target Sockets.
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped target Timers.
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 06:02:48 np0005479822 systemd[207477]: Closed D-Bus User Message Bus Socket.
Oct 10 06:02:48 np0005479822 systemd[207477]: Stopped Create User's Volatile Files and Directories.
Oct 10 06:02:48 np0005479822 systemd[207477]: Removed slice User Application Slice.
Oct 10 06:02:48 np0005479822 systemd[207477]: Reached target Shutdown.
Oct 10 06:02:48 np0005479822 systemd[207477]: Finished Exit the Session.
Oct 10 06:02:48 np0005479822 systemd[207477]: Reached target Exit the Session.
Oct 10 06:02:48 np0005479822 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 06:02:48 np0005479822 systemd[1]: Stopped User Manager for UID 0.
Oct 10 06:02:48 np0005479822 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 06:02:48 np0005479822 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 06:02:48 np0005479822 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 06:02:48 np0005479822 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 06:02:48 np0005479822 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 06:02:48 np0005479822 python3.9[208613]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:49.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:49.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:49 np0005479822 python3.9[208737]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090568.1351473-1341-241432994076533/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:49 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:50 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:50 np0005479822 python3.9[208890]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:50 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:51 np0005479822 python3.9[209042]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:02:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:51 np0005479822 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 06:02:51 np0005479822 systemd[1]: Stopped Load Kernel Modules.
Oct 10 06:02:51 np0005479822 systemd[1]: Stopping Load Kernel Modules...
Oct 10 06:02:51 np0005479822 systemd[1]: Starting Load Kernel Modules...
Oct 10 06:02:51 np0005479822 systemd[1]: Finished Load Kernel Modules.
Oct 10 06:02:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:51 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:51 np0005479822 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 10 06:02:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:52 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:52 np0005479822 python3.9[209200]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:02:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:52 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:52 np0005479822 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 10 06:02:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:53.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:53 np0005479822 python3.9[209353]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:02:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:53 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:54 np0005479822 python3.9[209506]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:02:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:54 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:54 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:54 np0005479822 python3.9[209682]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:02:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:02:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:02:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:55 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:55 np0005479822 python3.9[209807]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090574.3226275-1515-147727165027475/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:56 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:56 np0005479822 python3.9[209959]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:02:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:56 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:57.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:57 np0005479822 python3.9[210112]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:02:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:57 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:58 np0005479822 python3.9[210265]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:58 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:58 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:58 np0005479822 python3.9[210417]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:02:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000082s ======
Oct 10 06:02:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:59.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct 10 06:02:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:02:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:02:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:59.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:02:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:59 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:02:59 np0005479822 python3.9[210570]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:00 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:00 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:00 np0005479822 python3.9[210722]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:01 np0005479822 python3.9[210874]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:01.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:01 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:02 np0005479822 python3.9[211027]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:02 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:02 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:02 np0005479822 python3.9[211179]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:03:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:03 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:03 np0005479822 python3.9[211334]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:04 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:04 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:04 np0005479822 python3.9[211486]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:03:04 np0005479822 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 10 06:03:04 np0005479822 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 06:03:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:05.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:05 np0005479822 python3.9[211640]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:05 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:05 np0005479822 python3.9[211721]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:03:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:06 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:06 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7080000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:06 np0005479822 python3.9[211873]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:07.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:07 np0005479822 python3.9[211951]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:03:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:07 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:08 np0005479822 python3.9[212104]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:08 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:08 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:08 np0005479822 podman[212228]: 2025-10-10 10:03:08.664174894 +0000 UTC m=+0.107082340 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 10 06:03:08 np0005479822 python3.9[212274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100309 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:03:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:09 np0005479822 python3.9[212355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:09 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:10 np0005479822 python3.9[212508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:10 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:10 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:10 np0005479822 python3.9[212586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:11.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:11.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:11 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:11 np0005479822 python3.9[212739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:03:11 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:11 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:11 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:12 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:12 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:12 np0005479822 podman[212900]: 2025-10-10 10:03:12.681242361 +0000 UTC m=+0.147347455 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 10 06:03:12 np0005479822 python3.9[212943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:13 np0005479822 python3.9[213029]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:13 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.833790) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593833827, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1228, "num_deletes": 254, "total_data_size": 2976025, "memory_usage": 3016176, "flush_reason": "Manual Compaction"}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593851249, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1967335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18590, "largest_seqno": 19813, "table_properties": {"data_size": 1962011, "index_size": 2784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10656, "raw_average_key_size": 18, "raw_value_size": 1951470, "raw_average_value_size": 3387, "num_data_blocks": 125, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090484, "oldest_key_time": 1760090484, "file_creation_time": 1760090593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 17569 microseconds, and 8336 cpu microseconds.
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.851357) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1967335 bytes OK
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.851380) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.852809) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.852835) EVENT_LOG_v1 {"time_micros": 1760090593852828, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.852856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2970163, prev total WAL file size 2970163, number of live WAL files 2.
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.854877) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1921KB)], [33(11MB)]
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593854963, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14184621, "oldest_snapshot_seqno": -1}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5005 keys, 13703013 bytes, temperature: kUnknown
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593935501, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13703013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13668057, "index_size": 21342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126978, "raw_average_key_size": 25, "raw_value_size": 13575811, "raw_average_value_size": 2712, "num_data_blocks": 878, "num_entries": 5005, "num_filter_entries": 5005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.935815) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13703013 bytes
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.937374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.9 rd, 169.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.7 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(14.2) write-amplify(7.0) OK, records in: 5527, records dropped: 522 output_compression: NoCompression
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.937406) EVENT_LOG_v1 {"time_micros": 1760090593937392, "job": 18, "event": "compaction_finished", "compaction_time_micros": 80652, "compaction_time_cpu_micros": 50102, "output_level": 6, "num_output_files": 1, "total_output_size": 13703013, "num_input_records": 5527, "num_output_records": 5005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593938480, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593942821, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.854753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:03:13 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:03:14 np0005479822 python3.9[213182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:14 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:14 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:14 np0005479822 python3.9[213260]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:15 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:15 np0005479822 python3.9[213438]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:03:15 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:15 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:15 np0005479822 podman[213440]: 2025-10-10 10:03:15.761525271 +0000 UTC m=+0.084018177 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 06:03:15 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:16 np0005479822 systemd[1]: Starting Create netns directory...
Oct 10 06:03:16 np0005479822 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 06:03:16 np0005479822 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 06:03:16 np0005479822 systemd[1]: Finished Create netns directory.
Oct 10 06:03:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:16 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:16 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:17 np0005479822 python3.9[213651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:03:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:17 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7080002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:18 np0005479822 python3.9[213804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:18 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:18 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:18 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:03:18 np0005479822 python3.9[213927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090597.4055212-2136-76429710949211/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:03:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:19 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy ignored for local
Oct 10 06:03:19 np0005479822 kernel: ganesha.nfsd[206791]: segfault at 50 ip 00007f715963e32e sp 00007f711affc210 error 4 in libntirpc.so.5.8[7f7159623000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 06:03:19 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:03:19 np0005479822 systemd[1]: Started Process Core Dump (PID 214041/UID 0).
Oct 10 06:03:19 np0005479822 python3.9[214082]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:03:20 np0005479822 systemd-coredump[214053]: Process 205003 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007f715963e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:03:20 np0005479822 python3.9[214234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:20 np0005479822 systemd[1]: systemd-coredump@7-214041-0.service: Deactivated successfully.
Oct 10 06:03:20 np0005479822 systemd[1]: systemd-coredump@7-214041-0.service: Consumed 1.133s CPU time.
Oct 10 06:03:20 np0005479822 podman[214239]: 2025-10-10 10:03:20.792118111 +0000 UTC m=+0.051978608 container died 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 06:03:20 np0005479822 systemd[1]: var-lib-containers-storage-overlay-c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c-merged.mount: Deactivated successfully.
Oct 10 06:03:20 np0005479822 podman[214239]: 2025-10-10 10:03:20.844847438 +0000 UTC m=+0.104707885 container remove 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 06:03:20 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:03:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:03:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.831s CPU time.
Oct 10 06:03:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:21 np0005479822 python3.9[214404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090600.1084583-2211-231822840879581/.source.json _original_basename=.1ag5jklu follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:21.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:22 np0005479822 python3.9[214558]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:23.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:24 np0005479822 python3.9[214986]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 10 06:03:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100325 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:03:25 np0005479822 python3.9[215163]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 06:03:26 np0005479822 python3.9[215373]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 06:03:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:03:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:03:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:03:26 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:03:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:27.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:28 np0005479822 python3[215553]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 06:03:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:29.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:30 np0005479822 podman[215567]: 2025-10-10 10:03:30.122783997 +0000 UTC m=+1.277202824 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 06:03:30 np0005479822 podman[215625]: 2025-10-10 10:03:30.308976178 +0000 UTC m=+0.071672178 container create b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 06:03:30 np0005479822 podman[215625]: 2025-10-10 10:03:30.273392921 +0000 UTC m=+0.036088951 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 06:03:30 np0005479822 python3[215553]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 06:03:31 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 8.
Oct 10 06:03:31 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:03:31 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.831s CPU time.
Oct 10 06:03:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:31 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100331 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:03:31 np0005479822 python3.9[215818]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:03:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:31 np0005479822 podman[215879]: 2025-10-10 10:03:31.475965354 +0000 UTC m=+0.046119527 container create 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 06:03:31 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:31 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:31 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:31 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:31 np0005479822 podman[215879]: 2025-10-10 10:03:31.53556662 +0000 UTC m=+0.105720803 container init 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 06:03:31 np0005479822 podman[215879]: 2025-10-10 10:03:31.542029268 +0000 UTC m=+0.112183441 container start 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 06:03:31 np0005479822 bash[215879]: 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8
Oct 10 06:03:31 np0005479822 podman[215879]: 2025-10-10 10:03:31.458566167 +0000 UTC m=+0.028720360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:03:31 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:03:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:03:32 np0005479822 python3.9[216074]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:32 np0005479822 python3.9[216175]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:03:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:03:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:03:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:33.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:33.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:33 np0005479822 python3.9[216327]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090612.799489-2475-35260936063103/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:34 np0005479822 python3.9[216403]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 06:03:34 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:34 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:34 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:35 np0005479822 python3.9[216539]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:03:35 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:35 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:35 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:35.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:35 np0005479822 systemd[1]: Starting multipathd container...
Oct 10 06:03:35 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:03:35 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:35 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:35 np0005479822 systemd[1]: Started /usr/bin/podman healthcheck run b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.
Oct 10 06:03:35 np0005479822 podman[216582]: 2025-10-10 10:03:35.822585987 +0000 UTC m=+0.161435703 container init b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:03:35 np0005479822 multipathd[216598]: + sudo -E kolla_set_configs
Oct 10 06:03:35 np0005479822 podman[216582]: 2025-10-10 10:03:35.860339993 +0000 UTC m=+0.199189599 container start b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:03:35 np0005479822 podman[216582]: multipathd
Oct 10 06:03:35 np0005479822 systemd[1]: Started multipathd container.
Oct 10 06:03:35 np0005479822 multipathd[216598]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 06:03:35 np0005479822 multipathd[216598]: INFO:__main__:Validating config file
Oct 10 06:03:35 np0005479822 multipathd[216598]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 06:03:35 np0005479822 multipathd[216598]: INFO:__main__:Writing out command to execute
Oct 10 06:03:35 np0005479822 multipathd[216598]: ++ cat /run_command
Oct 10 06:03:35 np0005479822 multipathd[216598]: + CMD='/usr/sbin/multipathd -d'
Oct 10 06:03:35 np0005479822 multipathd[216598]: + ARGS=
Oct 10 06:03:35 np0005479822 multipathd[216598]: + sudo kolla_copy_cacerts
Oct 10 06:03:35 np0005479822 multipathd[216598]: + [[ ! -n '' ]]
Oct 10 06:03:35 np0005479822 multipathd[216598]: + . kolla_extend_start
Oct 10 06:03:35 np0005479822 multipathd[216598]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 06:03:35 np0005479822 multipathd[216598]: Running command: '/usr/sbin/multipathd -d'
Oct 10 06:03:35 np0005479822 multipathd[216598]: + umask 0022
Oct 10 06:03:35 np0005479822 multipathd[216598]: + exec /usr/sbin/multipathd -d
Oct 10 06:03:36 np0005479822 podman[216605]: 2025-10-10 10:03:35.999836383 +0000 UTC m=+0.121324152 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:03:36 np0005479822 multipathd[216598]: 3529.769287 | --------start up--------
Oct 10 06:03:36 np0005479822 multipathd[216598]: 3529.769315 | read /etc/multipath.conf
Oct 10 06:03:36 np0005479822 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-7873cd124ed0a0b8.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 06:03:36 np0005479822 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-7873cd124ed0a0b8.service: Failed with result 'exit-code'.
Oct 10 06:03:36 np0005479822 multipathd[216598]: 3529.775778 | path checkers start up
Oct 10 06:03:36 np0005479822 python3.9[216786]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:03:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:37.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:37 np0005479822 python3.9[216940]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:03:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:37 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:03:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:37 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:03:38 np0005479822 python3.9[217106]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:03:38 np0005479822 systemd[1]: Stopping multipathd container...
Oct 10 06:03:38 np0005479822 multipathd[216598]: 3532.454241 | exit (signal)
Oct 10 06:03:38 np0005479822 multipathd[216598]: 3532.454294 | --------shut down-------
Oct 10 06:03:38 np0005479822 systemd[1]: libpod-b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.scope: Deactivated successfully.
Oct 10 06:03:38 np0005479822 podman[217110]: 2025-10-10 10:03:38.721116097 +0000 UTC m=+0.072063088 container died b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:03:38 np0005479822 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-7873cd124ed0a0b8.timer: Deactivated successfully.
Oct 10 06:03:38 np0005479822 systemd[1]: Stopped /usr/bin/podman healthcheck run b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.
Oct 10 06:03:38 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-userdata-shm.mount: Deactivated successfully.
Oct 10 06:03:38 np0005479822 systemd[1]: var-lib-containers-storage-overlay-e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977-merged.mount: Deactivated successfully.
Oct 10 06:03:38 np0005479822 podman[217125]: 2025-10-10 10:03:38.828734742 +0000 UTC m=+0.074920438 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 06:03:38 np0005479822 podman[217110]: 2025-10-10 10:03:38.91462568 +0000 UTC m=+0.265572711 container cleanup b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:03:38 np0005479822 podman[217110]: multipathd
Oct 10 06:03:39 np0005479822 podman[217159]: multipathd
Oct 10 06:03:39 np0005479822 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 10 06:03:39 np0005479822 systemd[1]: Stopped multipathd container.
Oct 10 06:03:39 np0005479822 systemd[1]: Starting multipathd container...
Oct 10 06:03:39 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:03:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 06:03:39 np0005479822 systemd[1]: Started /usr/bin/podman healthcheck run b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.
Oct 10 06:03:39 np0005479822 podman[217173]: 2025-10-10 10:03:39.199468799 +0000 UTC m=+0.157996358 container init b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:03:39 np0005479822 multipathd[217189]: + sudo -E kolla_set_configs
Oct 10 06:03:39 np0005479822 podman[217173]: 2025-10-10 10:03:39.232938938 +0000 UTC m=+0.191466467 container start b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:03:39 np0005479822 podman[217173]: multipathd
Oct 10 06:03:39 np0005479822 systemd[1]: Started multipathd container.
Oct 10 06:03:39 np0005479822 multipathd[217189]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 06:03:39 np0005479822 multipathd[217189]: INFO:__main__:Validating config file
Oct 10 06:03:39 np0005479822 multipathd[217189]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 06:03:39 np0005479822 multipathd[217189]: INFO:__main__:Writing out command to execute
Oct 10 06:03:39 np0005479822 multipathd[217189]: ++ cat /run_command
Oct 10 06:03:39 np0005479822 multipathd[217189]: + CMD='/usr/sbin/multipathd -d'
Oct 10 06:03:39 np0005479822 multipathd[217189]: + ARGS=
Oct 10 06:03:39 np0005479822 multipathd[217189]: + sudo kolla_copy_cacerts
Oct 10 06:03:39 np0005479822 multipathd[217189]: + [[ ! -n '' ]]
Oct 10 06:03:39 np0005479822 multipathd[217189]: + . kolla_extend_start
Oct 10 06:03:39 np0005479822 multipathd[217189]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 06:03:39 np0005479822 multipathd[217189]: Running command: '/usr/sbin/multipathd -d'
Oct 10 06:03:39 np0005479822 multipathd[217189]: + umask 0022
Oct 10 06:03:39 np0005479822 multipathd[217189]: + exec /usr/sbin/multipathd -d
Oct 10 06:03:39 np0005479822 podman[217196]: 2025-10-10 10:03:39.345010765 +0000 UTC m=+0.095369529 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:03:39 np0005479822 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-42fb34184ad969d6.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 06:03:39 np0005479822 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-42fb34184ad969d6.service: Failed with result 'exit-code'.
Oct 10 06:03:39 np0005479822 multipathd[217189]: 3533.121084 | --------start up--------
Oct 10 06:03:39 np0005479822 multipathd[217189]: 3533.121102 | read /etc/multipath.conf
Oct 10 06:03:39 np0005479822 multipathd[217189]: 3533.126628 | path checkers start up
Oct 10 06:03:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:40 np0005479822 python3.9[217383]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:41.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:41 np0005479822 python3.9[217535]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 06:03:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:42 np0005479822 python3.9[217688]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 10 06:03:42 np0005479822 kernel: Key type psk registered
Oct 10 06:03:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:03:42.196 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:03:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:03:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:03:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:03:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:03:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:42 np0005479822 podman[217851]: 2025-10-10 10:03:42.889875279 +0000 UTC m=+0.131425439 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 10 06:03:42 np0005479822 python3.9[217852]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:03:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:43 np0005479822 python3.9[218001]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090622.375381-2715-91983414865060/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:03:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:03:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:44 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a54000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:44 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:44 np0005479822 python3.9[218167]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:45.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:45.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:45 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:45 np0005479822 python3.9[218319]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:03:45 np0005479822 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 06:03:45 np0005479822 systemd[1]: Stopped Load Kernel Modules.
Oct 10 06:03:45 np0005479822 systemd[1]: Stopping Load Kernel Modules...
Oct 10 06:03:45 np0005479822 systemd[1]: Starting Load Kernel Modules...
Oct 10 06:03:45 np0005479822 systemd[1]: Finished Load Kernel Modules.
Oct 10 06:03:46 np0005479822 podman[218448]: 2025-10-10 10:03:46.288985952 +0000 UTC m=+0.106797483 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 06:03:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:46 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:46 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:46 np0005479822 python3.9[218497]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 06:03:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:47.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100347 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:03:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:47 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:47 np0005479822 python3.9[218581]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 06:03:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:48 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:48 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:49.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:49.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:49 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:50 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:50 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:51.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:51 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:52 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:52 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:53 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:53 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:53 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:53 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:54 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:54 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:54 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:54 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:54 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:54 np0005479822 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 06:03:54 np0005479822 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 06:03:54 np0005479822 lvm[218698]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 06:03:54 np0005479822 lvm[218698]: VG ceph_vg0 finished
Oct 10 06:03:54 np0005479822 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 06:03:54 np0005479822 systemd[1]: Starting man-db-cache-update.service...
Oct 10 06:03:54 np0005479822 systemd[1]: Reloading.
Oct 10 06:03:54 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:03:54 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:03:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:55 np0005479822 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 06:03:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:55 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:56 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:56 np0005479822 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 06:03:56 np0005479822 systemd[1]: Finished man-db-cache-update.service.
Oct 10 06:03:56 np0005479822 systemd[1]: man-db-cache-update.service: Consumed 1.761s CPU time.
Oct 10 06:03:56 np0005479822 systemd[1]: run-r79cd6aa339d64d0c98f1ab76bd6f0e65.service: Deactivated successfully.
Oct 10 06:03:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:56 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:03:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:57 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:57 np0005479822 python3.9[220066]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:58 np0005479822 python3.9[220217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 06:03:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:58 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:58 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:03:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:03:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:03:59 np0005479822 python3.9[220373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:03:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:03:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:03:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:59.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:03:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:59 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:00 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100400 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:04:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:00 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:00 np0005479822 python3.9[220526]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 06:04:01 np0005479822 systemd[1]: Reloading.
Oct 10 06:04:01 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:04:01 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:04:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:01.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:01 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:02 np0005479822 python3.9[220712]: ansible-ansible.builtin.service_facts Invoked
Oct 10 06:04:02 np0005479822 network[220729]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 06:04:02 np0005479822 network[220730]: 'network-scripts' will be removed from distribution in near future.
Oct 10 06:04:02 np0005479822 network[220731]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 06:04:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:02 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:02 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:04:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:03.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:04:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:03 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:04 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:04 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:05.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:05 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:06 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:06 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:07.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:07 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:08 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:08 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:09 np0005479822 podman[220984]: 2025-10-10 10:04:09.071036863 +0000 UTC m=+0.078282484 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 10 06:04:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:09.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:09 np0005479822 python3.9[221031]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:09.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:09 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:09 np0005479822 podman[221034]: 2025-10-10 10:04:09.49724192 +0000 UTC m=+0.085575483 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 10 06:04:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:10 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:04:10 np0005479822 python3.9[221205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:10 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:10 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:11 np0005479822 python3.9[221358]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:11.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:11.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:11 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:12 np0005479822 python3.9[221512]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:12 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:12 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:12 np0005479822 python3.9[221665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:13 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:04:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:13 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:04:13 np0005479822 podman[221667]: 2025-10-10 10:04:13.157876519 +0000 UTC m=+0.118237527 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 06:04:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:13 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:13 np0005479822 python3.9[221845]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:14 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:14 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:14 np0005479822 python3.9[221998]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:15.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:15 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:15 np0005479822 python3.9[222177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:04:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:16 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:04:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:16 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:16 np0005479822 podman[222304]: 2025-10-10 10:04:16.488647618 +0000 UTC m=+0.090804327 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:04:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:16 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:16 np0005479822 python3.9[222352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:17 np0005479822 python3.9[222504]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:17.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:17 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a24000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:18 np0005479822 python3.9[222657]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:18 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c000d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:18 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:18 np0005479822 python3.9[222809]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:19.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:19 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:19 np0005479822 python3.9[222962]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:20 np0005479822 python3.9[223114]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:20 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:20 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:21 np0005479822 python3.9[223266]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:21.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:21 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:21.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:21 np0005479822 python3.9[223419]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:22 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100422 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:04:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:22 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:22 np0005479822 python3.9[223571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:23 np0005479822 python3.9[223723]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:23 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:23.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:24 np0005479822 python3.9[223876]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:24 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:24 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:24 np0005479822 python3.9[224028]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:25 np0005479822 python3.9[224180]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:25 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:25 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct 10 06:04:25 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:25.996673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:04:25 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct 10 06:04:25 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090665996719, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 931, "num_deletes": 251, "total_data_size": 2009372, "memory_usage": 2038960, "flush_reason": "Manual Compaction"}
Oct 10 06:04:25 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666007858, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1327534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19818, "largest_seqno": 20744, "table_properties": {"data_size": 1323321, "index_size": 1929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9372, "raw_average_key_size": 19, "raw_value_size": 1314853, "raw_average_value_size": 2727, "num_data_blocks": 86, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090594, "oldest_key_time": 1760090594, "file_creation_time": 1760090665, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 11259 microseconds, and 6950 cpu microseconds.
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.007930) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1327534 bytes OK
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.007957) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011482) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011537) EVENT_LOG_v1 {"time_micros": 1760090666011525, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2004680, prev total WAL file size 2004680, number of live WAL files 2.
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.012667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1296KB)], [36(13MB)]
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666012736, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15030547, "oldest_snapshot_seqno": -1}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4971 keys, 12865629 bytes, temperature: kUnknown
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666080532, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12865629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12831618, "index_size": 20461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126838, "raw_average_key_size": 25, "raw_value_size": 12740534, "raw_average_value_size": 2562, "num_data_blocks": 839, "num_entries": 4971, "num_filter_entries": 4971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.080851) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12865629 bytes
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.082283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.3 rd, 189.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(21.0) write-amplify(9.7) OK, records in: 5487, records dropped: 516 output_compression: NoCompression
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.082313) EVENT_LOG_v1 {"time_micros": 1760090666082300, "job": 20, "event": "compaction_finished", "compaction_time_micros": 67904, "compaction_time_cpu_micros": 45499, "output_level": 6, "num_output_files": 1, "total_output_size": 12865629, "num_input_records": 5487, "num_output_records": 4971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666082863, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666086801, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.012554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:04:26 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:04:26 np0005479822 python3.9[224333]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:26 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:26 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:26 np0005479822 python3.9[224485]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:27 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a24002720 fd 47 proxy ignored for local
Oct 10 06:04:27 np0005479822 kernel: ganesha.nfsd[222179]: segfault at 50 ip 00007f6b02d4732e sp 00007f6ad17f9210 error 4 in libntirpc.so.5.8[7f6b02d2c000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 10 06:04:27 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:04:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:27 np0005479822 systemd[1]: Started Process Core Dump (PID 224639/UID 0).
Oct 10 06:04:27 np0005479822 python3.9[224638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:04:28 np0005479822 systemd-coredump[224640]: Process 215909 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f6b02d4732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:04:28 np0005479822 python3.9[224792]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:28 np0005479822 systemd[1]: systemd-coredump@8-224639-0.service: Deactivated successfully.
Oct 10 06:04:28 np0005479822 systemd[1]: systemd-coredump@8-224639-0.service: Consumed 1.002s CPU time.
Oct 10 06:04:28 np0005479822 podman[224799]: 2025-10-10 10:04:28.672297826 +0000 UTC m=+0.042313119 container died 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 06:04:28 np0005479822 systemd[1]: var-lib-containers-storage-overlay-fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30-merged.mount: Deactivated successfully.
Oct 10 06:04:28 np0005479822 podman[224799]: 2025-10-10 10:04:28.72064858 +0000 UTC m=+0.090663843 container remove 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 06:04:28 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:04:28 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:04:28 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.522s CPU time.
Oct 10 06:04:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:29 np0005479822 python3.9[224990]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 06:04:30 np0005479822 python3.9[225142]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 06:04:30 np0005479822 systemd[1]: Reloading.
Oct 10 06:04:30 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:04:30 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:04:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:31 np0005479822 python3.9[225331]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:32 np0005479822 python3.9[225484]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100433 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:04:33 np0005479822 python3.9[225745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100433 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:04:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:04:34 np0005479822 python3.9[225943]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:34 np0005479822 python3.9[226096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:04:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:35.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:35.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:35 np0005479822 python3.9[226249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:36 np0005479822 python3.9[226429]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:37 np0005479822 python3.9[226582]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 06:04:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:38 np0005479822 python3.9[226736]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:39 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 9.
Oct 10 06:04:39 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:04:39 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.522s CPU time.
Oct 10 06:04:39 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:04:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:39.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:39 np0005479822 podman[226915]: 2025-10-10 10:04:39.323032741 +0000 UTC m=+0.056866007 container create d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:04:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:04:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:04:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:04:39 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:04:39 np0005479822 podman[226915]: 2025-10-10 10:04:39.299126817 +0000 UTC m=+0.032960133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:04:39 np0005479822 podman[226946]: 2025-10-10 10:04:39.413006154 +0000 UTC m=+0.076916006 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 10 06:04:39 np0005479822 podman[226915]: 2025-10-10 10:04:39.424489849 +0000 UTC m=+0.158323115 container init d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 06:04:39 np0005479822 podman[226915]: 2025-10-10 10:04:39.432109957 +0000 UTC m=+0.165943223 container start d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 06:04:39 np0005479822 bash[226915]: d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9
Oct 10 06:04:39 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:04:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:04:39 np0005479822 python3.9[226999]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:39 np0005479822 podman[227039]: 2025-10-10 10:04:39.740464538 +0000 UTC m=+0.082695354 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd)
Oct 10 06:04:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:04:40 np0005479822 python3.9[227214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:41 np0005479822 python3.9[227368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:41 np0005479822 python3.9[227521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:04:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:04:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:04:42.198 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:04:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:04:42.198 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:04:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:42 np0005479822 python3.9[227673]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:43 np0005479822 podman[227827]: 2025-10-10 10:04:43.431804499 +0000 UTC m=+0.179669919 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 06:04:43 np0005479822 python3.9[227828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:44 np0005479822 python3.9[228007]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:44 np0005479822 python3.9[228159]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:45.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:04:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:04:45 np0005479822 python3.9[228312]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:46 np0005479822 ceph-osd[76867]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000038s
Oct 10 06:04:46 np0005479822 python3.9[228464]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:46 np0005479822 podman[228588]: 2025-10-10 10:04:46.90835302 +0000 UTC m=+0.095844685 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:04:47 np0005479822 python3.9[228632]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:04:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:04:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:04:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:04:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:49.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:04:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:04:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:52 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:52 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:53 np0005479822 python3.9[228806]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 10 06:04:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:53.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:53 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:53.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:54 np0005479822 python3.9[228960]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 06:04:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:54 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:54 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:04:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9078 writes, 35K keys, 9078 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9078 writes, 2064 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 776 writes, 1221 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 776 writes, 366 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 10 06:04:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:55.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100455 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:04:55 np0005479822 python3.9[229118]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 06:04:55 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:04:55 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:04:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100455 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:04:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:55 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:04:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:04:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:56 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:56 np0005479822 systemd-logind[789]: New session 56 of user zuul.
Oct 10 06:04:56 np0005479822 systemd[1]: Started Session 56 of User zuul.
Oct 10 06:04:56 np0005479822 systemd[1]: session-56.scope: Deactivated successfully.
Oct 10 06:04:56 np0005479822 systemd-logind[789]: Session 56 logged out. Waiting for processes to exit.
Oct 10 06:04:56 np0005479822 systemd-logind[789]: Removed session 56.
Oct 10 06:04:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:56 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:57.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:57 np0005479822 python3.9[229331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:04:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:04:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:57 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:57.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:57 np0005479822 python3.9[229455]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090696.8126714-4353-257499300507692/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:58 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:58 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:58 np0005479822 python3.9[229606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:04:59 np0005479822 python3.9[229682]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:04:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:04:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:04:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:59 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:04:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:04:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:04:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:04:59 np0005479822 python3.9[229833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:05:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:00 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:00 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:00 np0005479822 python3.9[229955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090699.3529606-4353-77381754995880/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:05:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:01 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:01 np0005479822 python3.9[230106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:05:02 np0005479822 python3.9[230227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090700.9652262-4353-247935045104795/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:05:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:02 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:02 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:02 np0005479822 python3.9[230378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:05:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:03 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:03.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:03 np0005479822 python3.9[230500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090702.3959923-4353-239494197958387/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:05:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:04 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:04 np0005479822 python3.9[230652]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:05:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:04 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:05.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:05 np0005479822 python3.9[230804]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:05:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:05 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:06 np0005479822 python3.9[230957]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:06 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:06 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:06 np0005479822 python3.9[231109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:05:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:07.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:07 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:07.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:07 np0005479822 python3.9[231233]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760090706.3999648-4631-207074107049293/.source _original_basename=.2_y5ig3x follow=False checksum=30e51ddc5f6ccaebeef930e549a0be2e1fe3dd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 10 06:05:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:08 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:08 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:08 np0005479822 python3.9[231385]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:09 np0005479822 python3.9[231537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:05:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:09 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:09 np0005479822 podman[231634]: 2025-10-10 10:05:09.916434297 +0000 UTC m=+0.073770690 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:05:09 np0005479822 podman[231633]: 2025-10-10 10:05:09.954629933 +0000 UTC m=+0.104770080 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 10 06:05:10 np0005479822 python3.9[231684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090708.9406898-4709-124333478308629/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:05:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:10 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:10 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:10 np0005479822 python3.9[231847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 06:05:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:11 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:11 np0005479822 python3.9[231968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090710.3358572-4754-149086951529121/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 06:05:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:11.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:12 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:12 np0005479822 python3.9[232121]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 10 06:05:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:12 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:13.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:13 np0005479822 python3.9[232273]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 06:05:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:13 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:14 np0005479822 podman[232304]: 2025-10-10 10:05:14.016148208 +0000 UTC m=+0.122444643 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:05:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:14 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:14 np0005479822 python3[232452]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 06:05:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:14 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:15.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:15 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:16 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:16 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:17.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:17 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:18 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:18 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:19 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:19.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:20 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:20 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:20 np0005479822 podman[232533]: 2025-10-10 10:05:20.951350489 +0000 UTC m=+3.055921527 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:05:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:21 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:21.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:05:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3867 writes, 21K keys, 3867 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 3867 writes, 3867 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1408 writes, 6822 keys, 1408 commit groups, 1.0 writes per commit group, ingest: 16.38 MB, 0.03 MB/s#012Interval WAL: 1408 writes, 1408 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    146.9      0.22              0.11        10    0.022       0      0       0.0       0.0#012  L6      1/0   12.27 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    202.8    172.1      0.65              0.37         9    0.072     43K   4821       0.0       0.0#012 Sum      1/0   12.27 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    151.7    165.8      0.87              0.48        19    0.046     43K   4821       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.4    161.2    161.6      0.39              0.23         8    0.049     22K   2562       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    202.8    172.1      0.65              0.37         9    0.072     43K   4821       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    148.6      0.22              0.11         9    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.031, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.9 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 8.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(481,8.41 MB,2.76645%) FilterBlock(19,130.05 KB,0.041776%) IndexBlock(19,240.70 KB,0.0773229%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 06:05:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:22 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:22 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:23 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:23.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:24 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:24 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:25 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:25.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:26 np0005479822 podman[232465]: 2025-10-10 10:05:26.203358372 +0000 UTC m=+11.634791103 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 06:05:26 np0005479822 podman[232600]: 2025-10-10 10:05:26.348822254 +0000 UTC m=+0.048193730 container create aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 06:05:26 np0005479822 podman[232600]: 2025-10-10 10:05:26.322432211 +0000 UTC m=+0.021803677 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 06:05:26 np0005479822 python3[232452]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 10 06:05:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:26 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:26 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:27.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:27 np0005479822 python3.9[232790]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:27 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:28 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:28 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:28 np0005479822 python3.9[232945]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 10 06:05:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:29.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:29 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:29.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:29 np0005479822 python3.9[233098]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 06:05:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:30 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:30 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:30 np0005479822 python3[233250]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 06:05:31 np0005479822 podman[233286]: 2025-10-10 10:05:31.174777315 +0000 UTC m=+0.061010682 container create 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, tcib_managed=true)
Oct 10 06:05:31 np0005479822 podman[233286]: 2025-10-10 10:05:31.143671613 +0000 UTC m=+0.029904980 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 06:05:31 np0005479822 python3[233250]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 10 06:05:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:05:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:05:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:31 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:31.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:32 np0005479822 python3.9[233478]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:33 np0005479822 python3.9[233632]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:05:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:33 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:34 np0005479822 python3.9[233784]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090733.2445624-5030-68184296990557/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 06:05:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:34 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:34 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:34 np0005479822 python3.9[233860]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 06:05:34 np0005479822 systemd[1]: Reloading.
Oct 10 06:05:34 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:05:34 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:05:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:35.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:35 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:35 np0005479822 python3.9[233971]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 06:05:35 np0005479822 systemd[1]: Reloading.
Oct 10 06:05:35 np0005479822 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 06:05:35 np0005479822 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 06:05:36 np0005479822 systemd[1]: Starting nova_compute container...
Oct 10 06:05:36 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:05:36 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:36 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:36 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:36 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:36 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:36 np0005479822 podman[234036]: 2025-10-10 10:05:36.208443451 +0000 UTC m=+0.119580474 container init 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:05:36 np0005479822 podman[234036]: 2025-10-10 10:05:36.220715437 +0000 UTC m=+0.131852430 container start 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 06:05:36 np0005479822 podman[234036]: nova_compute
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + sudo -E kolla_set_configs
Oct 10 06:05:36 np0005479822 systemd[1]: Started nova_compute container.
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Validating config file
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying service configuration files
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Deleting /etc/ceph
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Creating directory /etc/ceph
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Writing out command to execute
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:36 np0005479822 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 06:05:36 np0005479822 nova_compute[234052]: ++ cat /run_command
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + CMD=nova-compute
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + ARGS=
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + sudo kolla_copy_cacerts
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + [[ ! -n '' ]]
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + . kolla_extend_start
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 06:05:36 np0005479822 nova_compute[234052]: Running command: 'nova-compute'
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + umask 0022
Oct 10 06:05:36 np0005479822 nova_compute[234052]: + exec nova-compute
Oct 10 06:05:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:36 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:36 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:37.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:37 np0005479822 python3.9[234213]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:37 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:37.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:38 np0005479822 python3.9[234365]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:38 np0005479822 nova_compute[234052]: 2025-10-10 10:05:38.629 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 06:05:38 np0005479822 nova_compute[234052]: 2025-10-10 10:05:38.629 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 06:05:38 np0005479822 nova_compute[234052]: 2025-10-10 10:05:38.629 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 06:05:38 np0005479822 nova_compute[234052]: 2025-10-10 10:05:38.630 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 10 06:05:38 np0005479822 nova_compute[234052]: 2025-10-10 10:05:38.774 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:05:38 np0005479822 nova_compute[234052]: 2025-10-10 10:05:38.807 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.292 2 INFO nova.virt.driver [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 10 06:05:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.467 2 INFO nova.compute.provider_config [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.479 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 python3.9[234543]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.551 2 WARNING oslo_config.cfg [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 06:05:39 np0005479822 nova_compute[234052]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 06:05:39 np0005479822 nova_compute[234052]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 06:05:39 np0005479822 nova_compute[234052]: and ``live_migration_inbound_addr`` respectively.
Oct 10 06:05:39 np0005479822 nova_compute[234052]: ).  Its value may be silently ignored in the future.#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_secret_uuid        = 21f084a3-af34-5230-afe4-ea5cd24a55f4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.622 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.639 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.639 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.640 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.640 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 10 06:05:39 np0005479822 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 06:05:39 np0005479822 systemd[1]: Started libvirt QEMU daemon.
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.702 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd597d93e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.704 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd597d93e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.705 2 INFO nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.719 2 WARNING nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct 10 06:05:39 np0005479822 nova_compute[234052]: 2025-10-10 10:05:39.719 2 DEBUG nova.virt.libvirt.volume.mount [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 10 06:05:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:05:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:05:40 np0005479822 podman[234758]: 2025-10-10 10:05:40.165747393 +0000 UTC m=+0.094324464 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 10 06:05:40 np0005479822 podman[234752]: 2025-10-10 10:05:40.167705037 +0000 UTC m=+0.095650800 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 06:05:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:40 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:40 np0005479822 python3.9[234843]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.582 2 INFO nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 
Oct 10 06:05:40 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <host>
Oct 10 06:05:40 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <uuid>b3ce5971-8a21-4607-a1ce-4c5a00fcffdd</uuid>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <arch>x86_64</arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model>EPYC-Rome-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <vendor>AMD</vendor>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <microcode version='16777317'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <signature family='23' model='49' stepping='0'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='x2apic'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='tsc-deadline'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='osxsave'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='hypervisor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='tsc_adjust'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='spec-ctrl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='stibp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='arch-capabilities'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='cmp_legacy'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='topoext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='virt-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='lbrv'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='tsc-scale'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='vmcb-clean'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='pause-filter'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='pfthreshold'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='svme-addr-chk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='rdctl-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='skip-l1dfl-vmentry'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='mds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature name='pschange-mc-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <pages unit='KiB' size='4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <pages unit='KiB' size='2048'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <pages unit='KiB' size='1048576'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <power_management>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <suspend_mem/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </power_management>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <iommu support='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <migration_features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <live/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <uri_transports>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <uri_transport>tcp</uri_transport>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <uri_transport>rdma</uri_transport>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </uri_transports>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </migration_features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <topology>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <cells num='1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <cell id='0'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          <memory unit='KiB'>7864356</memory>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          <pages unit='KiB' size='4'>1966089</pages>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          <pages unit='KiB' size='2048'>0</pages>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          <distances>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <sibling id='0' value='10'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          </distances>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          <cpus num='8'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:          </cpus>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        </cell>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </cells>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </topology>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <cache>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </cache>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <secmodel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model>selinux</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <doi>0</doi>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </secmodel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <secmodel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model>dac</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <doi>0</doi>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </secmodel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </host>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <guest>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <os_type>hvm</os_type>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <arch name='i686'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <wordsize>32</wordsize>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <domain type='qemu'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <domain type='kvm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <pae/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <nonpae/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <acpi default='on' toggle='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <apic default='on' toggle='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <cpuselection/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <deviceboot/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <disksnapshot default='on' toggle='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <externalSnapshot/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </guest>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <guest>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <os_type>hvm</os_type>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <arch name='x86_64'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <wordsize>64</wordsize>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <domain type='qemu'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <domain type='kvm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <acpi default='on' toggle='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <apic default='on' toggle='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <cpuselection/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <deviceboot/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <disksnapshot default='on' toggle='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <externalSnapshot/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </guest>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 
Oct 10 06:05:40 np0005479822 nova_compute[234052]: </capabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: #033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.593 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 10 06:05:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:40 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.632 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 06:05:40 np0005479822 nova_compute[234052]: <domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <domain>kvm</domain>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <arch>i686</arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <vcpu max='240'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <iothreads supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <os supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='firmware'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <loader supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>rom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pflash</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='readonly'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>yes</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='secure'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </loader>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </os>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='maximumMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <vendor>AMD</vendor>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='succor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='custom' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-128'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-256'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-512'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <memoryBacking supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='sourceType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>file</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>anonymous</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>memfd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </memoryBacking>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <disk supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='diskDevice'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>disk</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cdrom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>floppy</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>lun</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ide</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>fdc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>sata</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </disk>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <graphics supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vnc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egl-headless</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>dbus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </graphics>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <video supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='modelType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vga</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cirrus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>none</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>bochs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ramfb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </video>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hostdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='mode'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>subsystem</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='startupPolicy'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>mandatory</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>requisite</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>optional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='subsysType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pci</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='capsType'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='pciBackend'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hostdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <rng supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>random</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </rng>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <filesystem supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='driverType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>path</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>handle</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtiofs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </filesystem>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <tpm supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-tis</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-crb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emulator</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>external</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendVersion'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>2.0</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </tpm>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <redirdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </redirdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <channel supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pty</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>unix</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </channel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <crypto supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>qemu</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </crypto>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <interface supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>passt</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </interface>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <panic supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>isa</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>hyperv</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </panic>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <gic supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <genid supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backup supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <async-teardown supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <ps2 supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sev supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sgx supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hyperv supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='features'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>relaxed</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vapic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>spinlocks</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vpindex</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>runtime</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>synic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>stimer</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reset</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vendor_id</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>frequencies</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reenlightenment</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tlbflush</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ipi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>avic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emsr_bitmap</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>xmm_input</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hyperv>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <launchSecurity supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: </domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.640 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 06:05:40 np0005479822 nova_compute[234052]: <domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <domain>kvm</domain>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <arch>i686</arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <vcpu max='4096'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <iothreads supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <os supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='firmware'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <loader supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>rom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pflash</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='readonly'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>yes</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='secure'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </loader>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </os>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='maximumMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <vendor>AMD</vendor>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='succor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='custom' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-128'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-256'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-512'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <memoryBacking supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='sourceType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>file</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>anonymous</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>memfd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </memoryBacking>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <disk supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='diskDevice'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>disk</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cdrom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>floppy</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>lun</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>fdc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>sata</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </disk>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <graphics supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vnc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egl-headless</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>dbus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </graphics>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <video supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='modelType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vga</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cirrus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>none</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>bochs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ramfb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </video>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hostdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='mode'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>subsystem</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='startupPolicy'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>mandatory</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>requisite</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>optional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='subsysType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pci</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='capsType'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='pciBackend'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hostdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <rng supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>random</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </rng>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <filesystem supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='driverType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>path</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>handle</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtiofs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </filesystem>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <tpm supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-tis</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-crb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emulator</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>external</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendVersion'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>2.0</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </tpm>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <redirdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </redirdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <channel supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pty</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>unix</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </channel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <crypto supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>qemu</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </crypto>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <interface supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>passt</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </interface>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <panic supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>isa</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>hyperv</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </panic>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <gic supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <genid supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backup supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <async-teardown supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <ps2 supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sev supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sgx supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hyperv supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='features'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>relaxed</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vapic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>spinlocks</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vpindex</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>runtime</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>synic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>stimer</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reset</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vendor_id</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>frequencies</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reenlightenment</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tlbflush</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ipi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>avic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emsr_bitmap</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>xmm_input</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hyperv>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <launchSecurity supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: </domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.692 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.697 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 06:05:40 np0005479822 nova_compute[234052]: <domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <domain>kvm</domain>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <arch>x86_64</arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <vcpu max='240'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <iothreads supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <os supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='firmware'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <loader supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>rom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pflash</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='readonly'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>yes</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='secure'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </loader>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </os>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='maximumMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <vendor>AMD</vendor>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='succor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='custom' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-128'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-256'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-512'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <memoryBacking supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='sourceType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>file</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>anonymous</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>memfd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </memoryBacking>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <disk supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='diskDevice'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>disk</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cdrom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>floppy</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>lun</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ide</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>fdc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>sata</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </disk>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <graphics supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vnc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egl-headless</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>dbus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </graphics>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <video supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='modelType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vga</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cirrus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>none</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>bochs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ramfb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </video>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hostdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='mode'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>subsystem</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='startupPolicy'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>mandatory</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>requisite</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>optional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='subsysType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pci</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='capsType'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='pciBackend'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hostdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <rng supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>random</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </rng>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <filesystem supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='driverType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>path</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>handle</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtiofs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </filesystem>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <tpm supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-tis</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-crb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emulator</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>external</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendVersion'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>2.0</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </tpm>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <redirdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </redirdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <channel supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pty</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>unix</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </channel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <crypto supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>qemu</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </crypto>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <interface supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>passt</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </interface>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <panic supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>isa</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>hyperv</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </panic>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <gic supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <genid supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backup supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <async-teardown supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <ps2 supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sev supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sgx supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hyperv supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='features'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>relaxed</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vapic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>spinlocks</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vpindex</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>runtime</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>synic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>stimer</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reset</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vendor_id</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>frequencies</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reenlightenment</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tlbflush</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ipi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>avic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emsr_bitmap</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>xmm_input</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hyperv>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <launchSecurity supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: </domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.757 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 06:05:40 np0005479822 nova_compute[234052]: <domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <domain>kvm</domain>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <arch>x86_64</arch>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <vcpu max='4096'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <iothreads supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <os supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='firmware'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>efi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <loader supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>rom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pflash</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='readonly'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>yes</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='secure'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>yes</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>no</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </loader>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </os>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='maximumMigratable'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>on</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>off</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <vendor>AMD</vendor>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='succor'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <mode name='custom' supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Denverton-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='auto-ibrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amd-psfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='stibp-always-on'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='EPYC-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-128'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-256'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx10-512'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='prefetchiti'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Haswell-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512er'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512pf'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fma4'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tbm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xop'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='amx-tile'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-bf16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-fp16'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bitalg'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrc'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fzrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='la57'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='taa-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xfd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ifma'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cmpccxadd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fbsdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='fsrs'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ibrs-all'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mcdt-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pbrsb-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='psdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='serialize'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vaes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='hle'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='rtm'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512bw'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512cd'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512dq'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512f'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='avx512vl'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='invpcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pcid'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='pku'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='mpx'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='core-capability'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='split-lock-detect'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='cldemote'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='erms'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='gfni'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdir64b'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='movdiri'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='xsaves'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='athlon-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='core2duo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='coreduo-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='n270-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='ss'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <blockers model='phenom-v1'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnow'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <feature name='3dnowext'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </blockers>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </mode>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </cpu>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <memoryBacking supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <enum name='sourceType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>file</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>anonymous</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <value>memfd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </memoryBacking>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <disk supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='diskDevice'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>disk</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cdrom</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>floppy</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>lun</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>fdc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>sata</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </disk>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <graphics supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vnc</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egl-headless</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>dbus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </graphics>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <video supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='modelType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vga</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>cirrus</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>none</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>bochs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ramfb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </video>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hostdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='mode'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>subsystem</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='startupPolicy'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>mandatory</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>requisite</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>optional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='subsysType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pci</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>scsi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='capsType'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='pciBackend'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hostdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <rng supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtio-non-transitional</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>random</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>egd</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </rng>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <filesystem supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='driverType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>path</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>handle</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>virtiofs</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </filesystem>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <tpm supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-tis</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tpm-crb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emulator</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>external</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendVersion'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>2.0</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </tpm>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <redirdev supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='bus'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>usb</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </redirdev>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <channel supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>pty</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>unix</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </channel>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <crypto supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='type'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>qemu</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendModel'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>builtin</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </crypto>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <interface supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='backendType'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>default</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>passt</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </interface>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <panic supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='model'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>isa</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>hyperv</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </panic>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </devices>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  <features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <gic supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <genid supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <backup supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <async-teardown supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <ps2 supported='yes'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sev supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <sgx supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <hyperv supported='yes'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      <enum name='features'>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>relaxed</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vapic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>spinlocks</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vpindex</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>runtime</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>synic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>stimer</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reset</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>vendor_id</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>frequencies</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>reenlightenment</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>tlbflush</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>ipi</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>avic</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>emsr_bitmap</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:        <value>xmm_input</value>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:      </enum>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    </hyperv>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:    <launchSecurity supported='no'/>
Oct 10 06:05:40 np0005479822 nova_compute[234052]:  </features>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: </domainCapabilities>
Oct 10 06:05:40 np0005479822 nova_compute[234052]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.820 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.821 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.821 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.821 2 INFO nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Secure Boot support detected#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.825 2 INFO nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.826 2 INFO nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.844 2 DEBUG nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.868 2 INFO nova.virt.node [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Determined node identity c9b2c4a3-cb19-4387-8719-36027e3cdaec from /var/lib/nova/compute_id#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.885 2 WARNING nova.compute.manager [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Compute nodes ['c9b2c4a3-cb19-4387-8719-36027e3cdaec'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.921 2 INFO nova.compute.manager [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.957 2 WARNING nova.compute.manager [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.957 2 DEBUG oslo_concurrency.lockutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.957 2 DEBUG oslo_concurrency.lockutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.958 2 DEBUG oslo_concurrency.lockutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.958 2 DEBUG nova.compute.resource_tracker [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:05:40 np0005479822 nova_compute[234052]: 2025-10-10 10:05:40.958 2 DEBUG oslo_concurrency.processutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:05:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:41.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:41 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:05:41 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2390731396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:05:41 np0005479822 nova_compute[234052]: 2025-10-10 10:05:41.428 2 DEBUG oslo_concurrency.processutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:05:41 np0005479822 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 06:05:41 np0005479822 systemd[1]: Started libvirt nodedev daemon.
Oct 10 06:05:41 np0005479822 python3.9[235046]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 06:05:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:41 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:41 np0005479822 systemd[1]: Stopping nova_compute container...
Oct 10 06:05:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:41.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:41 np0005479822 nova_compute[234052]: 2025-10-10 10:05:41.659 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:05:41 np0005479822 nova_compute[234052]: 2025-10-10 10:05:41.660 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:05:41 np0005479822 nova_compute[234052]: 2025-10-10 10:05:41.661 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:05:42 np0005479822 virtqemud[234629]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 10 06:05:42 np0005479822 virtqemud[234629]: hostname: compute-1
Oct 10 06:05:42 np0005479822 virtqemud[234629]: End of file while reading data: Input/output error
Oct 10 06:05:42 np0005479822 systemd[1]: libpod-6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e.scope: Deactivated successfully.
Oct 10 06:05:42 np0005479822 systemd[1]: libpod-6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e.scope: Consumed 3.558s CPU time.
Oct 10 06:05:42 np0005479822 podman[235073]: 2025-10-10 10:05:42.030192203 +0000 UTC m=+0.421326236 container died 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:05:42 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e-userdata-shm.mount: Deactivated successfully.
Oct 10 06:05:42 np0005479822 systemd[1]: var-lib-containers-storage-overlay-b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45-merged.mount: Deactivated successfully.
Oct 10 06:05:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:05:42.198 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:05:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:05:42.199 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:05:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:05:42.199 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:05:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:05:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:42 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:42 np0005479822 podman[235073]: 2025-10-10 10:05:42.619524215 +0000 UTC m=+1.010658258 container cleanup 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:05:42 np0005479822 podman[235073]: nova_compute
Oct 10 06:05:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:42 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:42 np0005479822 podman[235103]: nova_compute
Oct 10 06:05:42 np0005479822 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 10 06:05:42 np0005479822 systemd[1]: Stopped nova_compute container.
Oct 10 06:05:42 np0005479822 systemd[1]: Starting nova_compute container...
Oct 10 06:05:42 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:05:42 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:42 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:42 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:42 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:42 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:42 np0005479822 podman[235116]: 2025-10-10 10:05:42.865941421 +0000 UTC m=+0.124717396 container init 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 06:05:42 np0005479822 podman[235116]: 2025-10-10 10:05:42.87977989 +0000 UTC m=+0.138555865 container start 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:05:42 np0005479822 podman[235116]: nova_compute
Oct 10 06:05:42 np0005479822 nova_compute[235132]: + sudo -E kolla_set_configs
Oct 10 06:05:42 np0005479822 systemd[1]: Started nova_compute container.
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Validating config file
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying service configuration files
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /etc/ceph
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Creating directory /etc/ceph
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Writing out command to execute
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:42 np0005479822 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 06:05:43 np0005479822 nova_compute[235132]: ++ cat /run_command
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + CMD=nova-compute
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + ARGS=
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + sudo kolla_copy_cacerts
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + [[ ! -n '' ]]
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + . kolla_extend_start
Oct 10 06:05:43 np0005479822 nova_compute[235132]: Running command: 'nova-compute'
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + umask 0022
Oct 10 06:05:43 np0005479822 nova_compute[235132]: + exec nova-compute
Oct 10 06:05:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:43 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:43.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:44 np0005479822 podman[235269]: 2025-10-10 10:05:44.275369394 +0000 UTC m=+0.111801772 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 06:05:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:44 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:44 np0005479822 python3.9[235315]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 06:05:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:44 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:44 np0005479822 systemd[1]: Started libpod-conmon-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c.scope.
Oct 10 06:05:44 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:05:44 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:44 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:44 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 10 06:05:44 np0005479822 podman[235345]: 2025-10-10 10:05:44.778696023 +0000 UTC m=+0.155424436 container init aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 06:05:44 np0005479822 podman[235345]: 2025-10-10 10:05:44.785870039 +0000 UTC m=+0.162598382 container start aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:05:44 np0005479822 python3.9[235315]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Applying nova statedir ownership
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 10 06:05:44 np0005479822 nova_compute_init[235369]: INFO:nova_statedir:Nova statedir ownership complete
Oct 10 06:05:44 np0005479822 systemd[1]: libpod-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c.scope: Deactivated successfully.
Oct 10 06:05:44 np0005479822 podman[235370]: 2025-10-10 10:05:44.862055615 +0000 UTC m=+0.042063413 container died aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:05:44 np0005479822 nova_compute[235132]: 2025-10-10 10:05:44.871 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 06:05:44 np0005479822 nova_compute[235132]: 2025-10-10 10:05:44.872 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 06:05:44 np0005479822 nova_compute[235132]: 2025-10-10 10:05:44.872 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 10 06:05:44 np0005479822 nova_compute[235132]: 2025-10-10 10:05:44.872 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 10 06:05:44 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c-userdata-shm.mount: Deactivated successfully.
Oct 10 06:05:44 np0005479822 systemd[1]: var-lib-containers-storage-overlay-0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1-merged.mount: Deactivated successfully.
Oct 10 06:05:44 np0005479822 podman[235383]: 2025-10-10 10:05:44.941911331 +0000 UTC m=+0.074642085 container cleanup aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Oct 10 06:05:44 np0005479822 systemd[1]: libpod-conmon-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c.scope: Deactivated successfully.
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.003 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.034 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:05:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:05:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.483 2 INFO nova.virt.driver [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 10 06:05:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:05:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:05:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:05:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.629 2 INFO nova.compute.provider_config [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.647 2 DEBUG oslo_concurrency.lockutils [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.647 2 DEBUG oslo_concurrency.lockutils [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.647 2 DEBUG oslo_concurrency.lockutils [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 systemd[1]: session-54.scope: Deactivated successfully.
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 systemd[1]: session-54.scope: Consumed 3min 13.374s CPU time.
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 systemd-logind[789]: Session 54 logged out. Waiting for processes to exit.
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 systemd-logind[789]: Removed session 54.
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.691 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.691 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.691 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.724 2 WARNING oslo_config.cfg [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 06:05:45 np0005479822 nova_compute[235132]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 06:05:45 np0005479822 nova_compute[235132]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 06:05:45 np0005479822 nova_compute[235132]: and ``live_migration_inbound_addr`` respectively.
Oct 10 06:05:45 np0005479822 nova_compute[235132]: ).  Its value may be silently ignored in the future.#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_secret_uuid        = 21f084a3-af34-5230-afe4-ea5cd24a55f4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.789 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.814 2 INFO nova.virt.node [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Determined node identity c9b2c4a3-cb19-4387-8719-36027e3cdaec from /var/lib/nova/compute_id#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.815 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.815 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.815 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.816 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.829 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f57c1491b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.831 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f57c1491b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.832 2 INFO nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.838 2 INFO nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <host>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <uuid>b3ce5971-8a21-4607-a1ce-4c5a00fcffdd</uuid>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <cpu>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <arch>x86_64</arch>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model>EPYC-Rome-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <vendor>AMD</vendor>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <microcode version='16777317'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <signature family='23' model='49' stepping='0'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='x2apic'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='tsc-deadline'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='osxsave'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='hypervisor'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='tsc_adjust'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='spec-ctrl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='stibp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='arch-capabilities'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='cmp_legacy'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='topoext'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='virt-ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='lbrv'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='tsc-scale'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='vmcb-clean'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='pause-filter'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='pfthreshold'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='svme-addr-chk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='rdctl-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='skip-l1dfl-vmentry'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='mds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature name='pschange-mc-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <pages unit='KiB' size='4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <pages unit='KiB' size='2048'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <pages unit='KiB' size='1048576'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </cpu>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <power_management>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <suspend_mem/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </power_management>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <iommu support='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <migration_features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <live/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <uri_transports>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <uri_transport>tcp</uri_transport>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <uri_transport>rdma</uri_transport>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </uri_transports>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </migration_features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <topology>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <cells num='1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <cell id='0'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          <memory unit='KiB'>7864356</memory>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          <pages unit='KiB' size='4'>1966089</pages>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          <pages unit='KiB' size='2048'>0</pages>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          <distances>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <sibling id='0' value='10'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          </distances>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          <cpus num='8'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:          </cpus>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        </cell>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </cells>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </topology>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <cache>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </cache>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <secmodel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model>selinux</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <doi>0</doi>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </secmodel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <secmodel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model>dac</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <doi>0</doi>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </secmodel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </host>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <guest>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <os_type>hvm</os_type>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <arch name='i686'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <wordsize>32</wordsize>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <domain type='qemu'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <domain type='kvm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </arch>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <pae/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <nonpae/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <acpi default='on' toggle='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <apic default='on' toggle='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <cpuselection/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <deviceboot/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <disksnapshot default='on' toggle='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <externalSnapshot/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </guest>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <guest>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <os_type>hvm</os_type>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <arch name='x86_64'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <wordsize>64</wordsize>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <domain type='qemu'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <domain type='kvm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </arch>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <acpi default='on' toggle='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <apic default='on' toggle='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <cpuselection/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <deviceboot/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <disksnapshot default='on' toggle='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <externalSnapshot/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </guest>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 
Oct 10 06:05:45 np0005479822 nova_compute[235132]: </capabilities>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: #033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.847 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.853 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 06:05:45 np0005479822 nova_compute[235132]: <domainCapabilities>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <domain>kvm</domain>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <arch>i686</arch>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <vcpu max='240'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <iothreads supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <os supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <enum name='firmware'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <loader supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>rom</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>pflash</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='readonly'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>yes</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='secure'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </loader>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <cpu>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='maximumMigratable'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <vendor>AMD</vendor>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='succor'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='custom' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10-128'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10-256'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10-512'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='KnightsMill'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512er'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512pf'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512er'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512pf'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G5'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tbm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tbm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='SierraForest'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cmpccxadd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cmpccxadd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Snowridge'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='athlon'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='athlon-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='core2duo'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='core2duo-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='coreduo'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='coreduo-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='n270'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='n270-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='phenom'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='phenom-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <memoryBacking supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <enum name='sourceType'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <value>file</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <value>anonymous</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <value>memfd</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </memoryBacking>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <disk supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='diskDevice'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>disk</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>cdrom</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>floppy</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>lun</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='bus'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>ide</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>fdc</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>scsi</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>sata</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio-transitional</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio-non-transitional</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <graphics supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>vnc</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>egl-headless</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>dbus</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <video supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='modelType'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>vga</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>cirrus</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>none</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>bochs</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>ramfb</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <hostdev supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='mode'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>subsystem</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='startupPolicy'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>default</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>mandatory</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>requisite</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>optional</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='subsysType'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>pci</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>scsi</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='capsType'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='pciBackend'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </hostdev>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <rng supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio-transitional</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtio-non-transitional</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>random</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>egd</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>builtin</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <filesystem supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='driverType'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>path</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>handle</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>virtiofs</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </filesystem>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <tpm supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>tpm-tis</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>tpm-crb</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>emulator</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>external</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='backendVersion'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>2.0</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </tpm>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <redirdev supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='bus'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </redirdev>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <channel supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>pty</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>unix</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </channel>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <crypto supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='model'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>qemu</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>builtin</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </crypto>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <interface supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='backendType'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>default</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>passt</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <panic supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>isa</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>hyperv</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </panic>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <gic supported='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <genid supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <backup supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <async-teardown supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <ps2 supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <sev supported='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <sgx supported='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <hyperv supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='features'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>relaxed</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>vapic</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>spinlocks</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>vpindex</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>runtime</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>synic</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>stimer</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>reset</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>vendor_id</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>frequencies</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>reenlightenment</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>tlbflush</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>ipi</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>avic</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>emsr_bitmap</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>xmm_input</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </hyperv>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <launchSecurity supported='no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: </domainCapabilities>
Oct 10 06:05:45 np0005479822 nova_compute[235132]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 06:05:45 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.861 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 06:05:45 np0005479822 nova_compute[235132]: <domainCapabilities>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <domain>kvm</domain>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <arch>i686</arch>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <vcpu max='4096'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <iothreads supported='yes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <os supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <enum name='firmware'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <loader supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>rom</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>pflash</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='readonly'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>yes</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='secure'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </loader>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:  <cpu>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <enum name='maximumMigratable'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <vendor>AMD</vendor>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='succor'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:    <mode name='custom' supported='yes'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v3'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v4'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10-128'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10-256'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx10-512'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:45 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='KnightsMill'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512er'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512pf'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512er'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512pf'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tbm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tbm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SierraForest'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cmpccxadd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cmpccxadd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='athlon'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='athlon-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='core2duo'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='core2duo-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='coreduo'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='coreduo-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='n270'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='n270-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='phenom'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='phenom-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <memoryBacking supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <enum name='sourceType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>file</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>anonymous</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>memfd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </memoryBacking>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <disk supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='diskDevice'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>disk</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>cdrom</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>floppy</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>lun</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='bus'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>fdc</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>scsi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>sata</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-non-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <graphics supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vnc</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>egl-headless</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>dbus</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <video supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='modelType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vga</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>cirrus</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>none</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>bochs</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>ramfb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <hostdev supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='mode'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>subsystem</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='startupPolicy'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>default</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>mandatory</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>requisite</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>optional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='subsysType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>pci</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>scsi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='capsType'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='pciBackend'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </hostdev>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <rng supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-non-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>random</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>egd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>builtin</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <filesystem supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='driverType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>path</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>handle</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtiofs</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </filesystem>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <tpm supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>tpm-tis</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>tpm-crb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>emulator</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>external</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendVersion'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>2.0</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </tpm>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <redirdev supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='bus'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </redirdev>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <channel supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>pty</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>unix</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </channel>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <crypto supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>qemu</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>builtin</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </crypto>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <interface supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>default</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>passt</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <panic supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>isa</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>hyperv</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </panic>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <gic supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <genid supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <backup supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <async-teardown supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <ps2 supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <sev supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <sgx supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <hyperv supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='features'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>relaxed</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vapic</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>spinlocks</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vpindex</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>runtime</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>synic</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>stimer</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>reset</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vendor_id</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>frequencies</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>reenlightenment</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>tlbflush</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>ipi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>avic</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>emsr_bitmap</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>xmm_input</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </hyperv>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <launchSecurity supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:05:46 np0005479822 nova_compute[235132]: </domainCapabilities>
Oct 10 06:05:46 np0005479822 nova_compute[235132]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 06:05:46 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.912 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 06:05:46 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.913 2 DEBUG nova.virt.libvirt.volume.mount [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 10 06:05:46 np0005479822 nova_compute[235132]: 2025-10-10 10:05:45.919 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 06:05:46 np0005479822 nova_compute[235132]: <domainCapabilities>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <domain>kvm</domain>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <arch>x86_64</arch>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <vcpu max='240'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <iothreads supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <os supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <enum name='firmware'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <loader supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>rom</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>pflash</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='readonly'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>yes</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='secure'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </loader>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <cpu>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='maximumMigratable'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <vendor>AMD</vendor>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='succor'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='custom' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10-128'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10-256'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10-512'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='IvyBridge-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='KnightsMill'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512er'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512pf'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='KnightsMill-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4fmaps'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-4vnniw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512er'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512pf'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G4-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tbm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Opteron_G5-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fma4'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tbm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xop'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SapphireRapids-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SierraForest'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cmpccxadd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='SierraForest-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-ne-convert'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cmpccxadd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Client-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Skylake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='core-capability'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='split-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Snowridge-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='athlon'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='athlon-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='core2duo'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='core2duo-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='coreduo'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='coreduo-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='n270'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='n270-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='phenom'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='phenom-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnow'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='3dnowext'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <memoryBacking supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <enum name='sourceType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>file</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>anonymous</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>memfd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </memoryBacking>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <disk supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='diskDevice'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>disk</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>cdrom</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>floppy</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>lun</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='bus'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>ide</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>fdc</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>scsi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>sata</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-non-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <graphics supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vnc</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>egl-headless</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>dbus</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <video supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='modelType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vga</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>cirrus</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>none</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>bochs</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>ramfb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <hostdev supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='mode'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>subsystem</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='startupPolicy'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>default</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>mandatory</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>requisite</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>optional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='subsysType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>pci</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>scsi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='capsType'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='pciBackend'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </hostdev>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <rng supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtio-non-transitional</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>random</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>egd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>builtin</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <filesystem supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='driverType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>path</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>handle</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>virtiofs</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </filesystem>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <tpm supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>tpm-tis</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>tpm-crb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>emulator</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>external</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendVersion'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>2.0</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </tpm>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <redirdev supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='bus'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>usb</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </redirdev>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <channel supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>pty</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>unix</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </channel>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <crypto supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>qemu</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendModel'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>builtin</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </crypto>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <interface supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='backendType'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>default</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>passt</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <panic supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='model'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>isa</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>hyperv</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </panic>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <gic supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <vmcoreinfo supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <genid supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <backingStoreInput supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <backup supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <async-teardown supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <ps2 supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <sev supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <sgx supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <hyperv supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='features'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>relaxed</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vapic</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>spinlocks</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vpindex</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>runtime</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>synic</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>stimer</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>reset</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>vendor_id</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>frequencies</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>reenlightenment</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>tlbflush</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>ipi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>avic</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>emsr_bitmap</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>xmm_input</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </hyperv>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <launchSecurity supported='no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:05:46 np0005479822 nova_compute[235132]: </domainCapabilities>
Oct 10 06:05:46 np0005479822 nova_compute[235132]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 06:05:46 np0005479822 nova_compute[235132]: 2025-10-10 10:05:46.014 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 06:05:46 np0005479822 nova_compute[235132]: <domainCapabilities>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <path>/usr/libexec/qemu-kvm</path>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <domain>kvm</domain>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <arch>x86_64</arch>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <vcpu max='4096'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <iothreads supported='yes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <os supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <enum name='firmware'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>efi</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <loader supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='type'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>rom</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>pflash</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='readonly'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>yes</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='secure'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>yes</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>no</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </loader>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:  <cpu>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='host-passthrough' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='hostPassthroughMigratable'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='maximum' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <enum name='maximumMigratable'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>on</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <value>off</value>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </enum>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='host-model' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <vendor>AMD</vendor>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='x2apic'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-deadline'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='hypervisor'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc_adjust'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='spec-ctrl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='stibp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='arch-capabilities'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='ssbd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='cmp_legacy'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='overflow-recov'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='succor'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='ibrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='amd-ssbd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='virt-ssbd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='lbrv'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='tsc-scale'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='vmcb-clean'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='flushbyasid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='pause-filter'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='pfthreshold'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='svme-addr-chk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='rdctl-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='mds-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='gds-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='require' name='rfds-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <feature policy='disable' name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    </mode>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:    <mode name='custom' supported='yes'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Broadwell-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cascadelake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Cooperlake-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mpx'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Denverton-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Dhyana-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Genoa-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='auto-ibrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Milan-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amd-psfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='no-nested-data-bp'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='null-sel-clr-base'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='stibp-always-on'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-Rome-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='EPYC-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='GraniteRapids-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-int8'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='amx-tile'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx-vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10-128'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10-256'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx10-512'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-bf16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-fp16'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='bus-lock-detect'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='cldemote'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fbsdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrc'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrs'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fzrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='mcdt-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdir64b'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='movdiri'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pbrsb-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='prefetchiti'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='psdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='sbdr-ssdp-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='serialize'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ss'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='tsx-ldtrk'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xfd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Haswell-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-noTSX'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v1'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='hle'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='rtm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v2'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v3'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v4'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v5'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v6'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512dq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512f'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512ifma'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vbmi2'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vl'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512vnni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='erms'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='fsrm'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='gfni'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='ibrs-all'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='invpcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='la57'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pcid'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='pku'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='taa-no'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vaes'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='vpclmulqdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='xsaves'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      </blockers>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:      <blockers model='Icelake-Server-v7'>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512-vpopcntdq'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bitalg'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512bw'/>
Oct 10 06:05:46 np0005479822 nova_compute[235132]:        <feature name='avx512cd'/>
Oct 10 06:07:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:05.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:06 np0005479822 rsyslogd[1005]: imjournal: 1338 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 10 06:07:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:07.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100707 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:07:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:07.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:09.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:09.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:07:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:07:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:11.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:11 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:12 np0005479822 podman[236186]: 2025-10-10 10:07:12.968258945 +0000 UTC m=+0.063388477 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=multipathd)
Oct 10 06:07:13 np0005479822 podman[236185]: 2025-10-10 10:07:13.003604834 +0000 UTC m=+0.097753540 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 10 06:07:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100713 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:07:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:13 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:13.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:15.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:15 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:16 np0005479822 podman[236228]: 2025-10-10 10:07:16.047780188 +0000 UTC m=+0.143800052 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 10 06:07:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:07:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:17.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:17 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:17.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:19.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:07:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:07:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:19.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:21.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:21 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:07:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100722 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:07:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:23 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:23.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:25.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:25 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:25.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:27 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:27.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:29.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:29 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:29.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:29 np0005479822 podman[236288]: 2025-10-10 10:07:29.98989772 +0000 UTC m=+0.086756589 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 10 06:07:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:31 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:31 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:07:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:33 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:33.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:07:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:07:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:35.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:35 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:37 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:37.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:07:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:07:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:41 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:07:42.201 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:07:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:07:42.201 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:07:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:07:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:07:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:07:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:43 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:43 np0005479822 podman[236343]: 2025-10-10 10:07:43.980798485 +0000 UTC m=+0.073785812 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:07:43 np0005479822 podman[236342]: 2025-10-10 10:07:43.980691332 +0000 UTC m=+0.064030365 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:07:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:45.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:45 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:45 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:07:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.401 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.402 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.423 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.423 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.457 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.457 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.457 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.458 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.458 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:07:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:07:46 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4016404872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:07:46 np0005479822 nova_compute[235132]: 2025-10-10 10:07:46.986 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:07:47 np0005479822 podman[236401]: 2025-10-10 10:07:47.008687193 +0000 UTC m=+0.118091337 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.165 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.166 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5223MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.167 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.167 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.241 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.242 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.270 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:07:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:47 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:07:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3155772583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.741 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.748 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.763 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.765 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:07:47 np0005479822 nova_compute[235132]: 2025-10-10 10:07:47.765 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:07:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.386 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.386 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.387 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.406 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.406 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:07:48 np0005479822 nova_compute[235132]: 2025-10-10 10:07:48.408 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:07:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:07:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:07:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100748 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:07:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:49.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:49 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:51.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:51 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:51 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:07:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:53 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:07:53 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:07:53 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:07:53 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:07:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:53.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:53 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:07:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:07:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:55.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100755 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:07:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:55 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:07:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:57.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.549539) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877549589, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2357, "num_deletes": 251, "total_data_size": 6246893, "memory_usage": 6341456, "flush_reason": "Manual Compaction"}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877574398, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4069084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20749, "largest_seqno": 23101, "table_properties": {"data_size": 4059605, "index_size": 5973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19602, "raw_average_key_size": 20, "raw_value_size": 4040653, "raw_average_value_size": 4165, "num_data_blocks": 262, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090666, "oldest_key_time": 1760090666, "file_creation_time": 1760090877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 24906 microseconds, and 17074 cpu microseconds.
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574448) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4069084 bytes OK
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574470) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575934) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575960) EVENT_LOG_v1 {"time_micros": 1760090877575952, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6236419, prev total WAL file size 6272944, number of live WAL files 2.
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.578839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(3973KB)], [39(12MB)]
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877578918, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16934713, "oldest_snapshot_seqno": -1}
Oct 10 06:07:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5421 keys, 14721572 bytes, temperature: kUnknown
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877656117, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14721572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14683316, "index_size": 23618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136712, "raw_average_key_size": 25, "raw_value_size": 14583102, "raw_average_value_size": 2690, "num_data_blocks": 976, "num_entries": 5421, "num_filter_entries": 5421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.656613) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14721572 bytes
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.658076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.0 rd, 190.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 5941, records dropped: 520 output_compression: NoCompression
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.658106) EVENT_LOG_v1 {"time_micros": 1760090877658093, "job": 22, "event": "compaction_finished", "compaction_time_micros": 77344, "compaction_time_cpu_micros": 53724, "output_level": 6, "num_output_files": 1, "total_output_size": 14721572, "num_input_records": 5941, "num_output_records": 5421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877659542, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877664351, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.578683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:07:57 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:07:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:57.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:07:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:07:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:59.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:07:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:59 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:07:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:07:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:07:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:59.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:00 np0005479822 podman[236589]: 2025-10-10 10:08:00.962476634 +0000 UTC m=+0.066916374 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 06:08:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:01 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:03.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:03 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:03.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:05.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:05 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:07.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:07 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:11 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:13 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:13.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:14 np0005479822 podman[236619]: 2025-10-10 10:08:14.968192578 +0000 UTC m=+0.073146506 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:08:14 np0005479822 podman[236620]: 2025-10-10 10:08:14.999777424 +0000 UTC m=+0.087981243 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 10 06:08:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:15 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:17.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:17 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:17.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:18 np0005479822 podman[236687]: 2025-10-10 10:08:18.014277414 +0000 UTC m=+0.106204131 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:08:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:19.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:08:19.604 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:08:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:08:19.606 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:08:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:08:19.607 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:08:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:19.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:21.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:21 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:21.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:08:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:23.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:08:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:23 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:25.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:25 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:25.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:27.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:27 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:27.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:29 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:29.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:31 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:31.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:31 np0005479822 podman[236720]: 2025-10-10 10:08:31.977308616 +0000 UTC m=+0.078061230 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 10 06:08:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:33.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:33 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:33.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:35.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:35 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:37 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:08:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:08:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:39.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:08:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:08:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:41.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:41 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:41.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:08:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:08:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:08:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:08:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:08:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:08:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:43 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:45.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:45 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:45 np0005479822 podman[236773]: 2025-10-10 10:08:45.967146535 +0000 UTC m=+0.067249257 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 06:08:45 np0005479822 podman[236774]: 2025-10-10 10:08:45.966795816 +0000 UTC m=+0.066900538 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 06:08:46 np0005479822 nova_compute[235132]: 2025-10-10 10:08:46.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:46 np0005479822 nova_compute[235132]: 2025-10-10 10:08:46.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.073 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.073 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.074 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.074 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.074 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:08:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:08:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3922966226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.556 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:08:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:47.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:47 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.719 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.721 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5231MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.722 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.722 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.830 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.831 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:08:47 np0005479822 nova_compute[235132]: 2025-10-10 10:08:47.848 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:08:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:08:48 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2549355807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:08:48 np0005479822 nova_compute[235132]: 2025-10-10 10:08:48.366 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:08:48 np0005479822 nova_compute[235132]: 2025-10-10 10:08:48.372 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:08:48 np0005479822 nova_compute[235132]: 2025-10-10 10:08:48.387 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:08:48 np0005479822 nova_compute[235132]: 2025-10-10 10:08:48.388 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:08:48 np0005479822 nova_compute[235132]: 2025-10-10 10:08:48.388 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:08:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:49 np0005479822 podman[236858]: 2025-10-10 10:08:49.029362796 +0000 UTC m=+0.133250140 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.388 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.388 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.389 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.403 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.404 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.405 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.406 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.406 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:08:49 np0005479822 nova_compute[235132]: 2025-10-10 10:08:49.406 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:08:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:49 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:51 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:08:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:08:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:08:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:53.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:08:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:53 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:53.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:55.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:55 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:08:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100857 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:08:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:57.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:08:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:08:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:57.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:08:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 06:08:58 np0005479822 podman[237040]: 2025-10-10 10:08:58.920939184 +0000 UTC m=+0.110674121 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 06:08:59 np0005479822 podman[237040]: 2025-10-10 10:08:59.042901308 +0000 UTC m=+0.232636225 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 06:08:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:08:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:59.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:08:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:59 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:08:59 np0005479822 podman[237163]: 2025-10-10 10:08:59.740106696 +0000 UTC m=+0.070252608 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 06:08:59 np0005479822 podman[237163]: 2025-10-10 10:08:59.747771844 +0000 UTC m=+0.077917736 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 06:08:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:08:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:08:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:00 np0005479822 podman[237253]: 2025-10-10 10:09:00.148177347 +0000 UTC m=+0.077369130 container exec 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 06:09:00 np0005479822 podman[237253]: 2025-10-10 10:09:00.162963527 +0000 UTC m=+0.092155330 container exec_died 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 06:09:00 np0005479822 podman[237319]: 2025-10-10 10:09:00.471379216 +0000 UTC m=+0.080791142 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 06:09:00 np0005479822 podman[237319]: 2025-10-10 10:09:00.506881025 +0000 UTC m=+0.116292891 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 06:09:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:00 np0005479822 podman[237388]: 2025-10-10 10:09:00.830178997 +0000 UTC m=+0.078120501 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.28.2)
Oct 10 06:09:00 np0005479822 podman[237388]: 2025-10-10 10:09:00.907642968 +0000 UTC m=+0.155584422 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, version=2.2.4)
Oct 10 06:09:01 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:01 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:01 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:01 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:01 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 06:09:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:01 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:02 np0005479822 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 06:09:02 np0005479822 podman[237504]: 2025-10-10 10:09:02.363868107 +0000 UTC m=+0.080832644 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:09:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 06:09:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:09:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:09:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:03.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:03 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:03.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:05.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:05 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:09:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:07.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:09:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:07 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:09.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:09:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:09:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:09:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:11.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:11 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:09:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:13.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:13 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:15.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:15 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80027e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:16 np0005479822 podman[237582]: 2025-10-10 10:09:16.882304905 +0000 UTC m=+0.091450692 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 10 06:09:16 np0005479822 podman[237581]: 2025-10-10 10:09:16.887795723 +0000 UTC m=+0.096230841 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 06:09:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:17.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:17 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100919 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:09:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:19.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80027e0 fd 39 proxy ignored for local
Oct 10 06:09:19 np0005479822 kernel: ganesha.nfsd[237556]: segfault at 50 ip 00007f009720d32e sp 00007f005bffe210 error 4 in libntirpc.so.5.8[7f00971f2000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 10 06:09:19 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:09:19 np0005479822 systemd[1]: Started Process Core Dump (PID 237624/UID 0).
Oct 10 06:09:19 np0005479822 podman[237625]: 2025-10-10 10:09:19.85027628 +0000 UTC m=+0.126157928 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:09:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:20 np0005479822 systemd-coredump[237626]: Process 236106 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007f009720d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:09:21 np0005479822 systemd[1]: systemd-coredump@10-237624-0.service: Deactivated successfully.
Oct 10 06:09:21 np0005479822 systemd[1]: systemd-coredump@10-237624-0.service: Consumed 1.290s CPU time.
Oct 10 06:09:21 np0005479822 podman[237656]: 2025-10-10 10:09:21.140051263 +0000 UTC m=+0.029485167 container died 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:09:21 np0005479822 systemd[1]: var-lib-containers-storage-overlay-52b5db3a1abe9ac35ae244b86c53e77450ea7623048e488d415454372713c949-merged.mount: Deactivated successfully.
Oct 10 06:09:21 np0005479822 podman[237656]: 2025-10-10 10:09:21.18624493 +0000 UTC m=+0.075678814 container remove 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:09:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:09:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:09:21 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.924s CPU time.
Oct 10 06:09:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:21.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:21.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100925 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:09:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:27.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:29.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:31 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 11.
Oct 10 06:09:31 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:09:31 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.924s CPU time.
Oct 10 06:09:31 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:09:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:31 np0005479822 podman[237753]: 2025-10-10 10:09:31.942849119 +0000 UTC m=+0.056926959 container create bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 06:09:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:32 np0005479822 podman[237753]: 2025-10-10 10:09:31.914083652 +0000 UTC m=+0.028161532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:09:32 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:09:32 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:09:32 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:09:32 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:09:32 np0005479822 podman[237753]: 2025-10-10 10:09:32.03395986 +0000 UTC m=+0.148037770 container init bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 06:09:32 np0005479822 podman[237753]: 2025-10-10 10:09:32.0461651 +0000 UTC m=+0.160242940 container start bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 06:09:32 np0005479822 bash[237753]: bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:09:32 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:09:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:09:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:32 np0005479822 podman[237811]: 2025-10-10 10:09:32.990903914 +0000 UTC m=+0.090482766 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 06:09:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:33.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:34.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:35.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:38.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:38 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:09:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:38 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:09:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:40.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:09:42.203 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:09:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:09:42.203 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:09:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:09:42.203 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:09:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.185879) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984185930, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 250, "total_data_size": 3195446, "memory_usage": 3259680, "flush_reason": "Manual Compaction"}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984198670, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1331552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23106, "largest_seqno": 24398, "table_properties": {"data_size": 1327081, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11513, "raw_average_key_size": 20, "raw_value_size": 1317447, "raw_average_value_size": 2340, "num_data_blocks": 86, "num_entries": 563, "num_filter_entries": 563, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090877, "oldest_key_time": 1760090877, "file_creation_time": 1760090984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12931 microseconds, and 7547 cpu microseconds.
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.198805) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1331552 bytes OK
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.198858) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.199884) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.199902) EVENT_LOG_v1 {"time_micros": 1760090984199895, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.199923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3189273, prev total WAL file size 3189273, number of live WAL files 2.
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.201269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1300KB)], [42(14MB)]
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984201302, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16053124, "oldest_snapshot_seqno": -1}
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5513 keys, 12707796 bytes, temperature: kUnknown
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984270652, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12707796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12672046, "index_size": 20856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 138924, "raw_average_key_size": 25, "raw_value_size": 12573408, "raw_average_value_size": 2280, "num_data_blocks": 855, "num_entries": 5513, "num_filter_entries": 5513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.271569) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12707796 bytes
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.273427) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.4 rd, 181.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(21.6) write-amplify(9.5) OK, records in: 5984, records dropped: 471 output_compression: NoCompression
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.273467) EVENT_LOG_v1 {"time_micros": 1760090984273453, "job": 24, "event": "compaction_finished", "compaction_time_micros": 69980, "compaction_time_cpu_micros": 28714, "output_level": 6, "num_output_files": 1, "total_output_size": 12707796, "num_input_records": 5984, "num_output_records": 5513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984274075, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984277104, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.201215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:09:44 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:45 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:46.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - - [10/Oct/2025:10:09:46.574 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.001000026s
Oct 10 06:09:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:46 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a10001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:46 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a10001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:47 np0005479822 nova_compute[235132]: 2025-10-10 10:09:47.047 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:47 np0005479822 nova_compute[235132]: 2025-10-10 10:09:47.048 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100947 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:09:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:47 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a14002070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:47.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:47 np0005479822 podman[237878]: 2025-10-10 10:09:47.997148433 +0000 UTC m=+0.085482300 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 10 06:09:47 np0005479822 podman[237879]: 2025-10-10 10:09:47.997184734 +0000 UTC m=+0.090198348 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 06:09:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:48.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.038 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.067 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.100 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.101 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.102 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.102 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.103 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:09:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:09:48 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2615600125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.588 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:09:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:48 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.779 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.781 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5227MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.782 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.782 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.866 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.867 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:09:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:48 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0000ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:09:48 np0005479822 nova_compute[235132]: 2025-10-10 10:09:48.881 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:09:49 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:09:49 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4222935933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:09:49 np0005479822 nova_compute[235132]: 2025-10-10 10:09:49.346 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:09:49 np0005479822 nova_compute[235132]: 2025-10-10 10:09:49.354 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:09:49 np0005479822 nova_compute[235132]: 2025-10-10 10:09:49.373 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:09:49 np0005479822 nova_compute[235132]: 2025-10-10 10:09:49.376 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:09:49 np0005479822 nova_compute[235132]: 2025-10-10 10:09:49.376 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:09:49 np0005479822 kernel: ganesha.nfsd[237863]: segfault at 50 ip 00007f8abd3e232e sp 00007f8a7fffe210 error 4 in libntirpc.so.5.8[7f8abd3c7000+2c000] likely on CPU 7 (core 0, socket 7)
Oct 10 06:09:49 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:09:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:49 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a100029b0 fd 39 proxy ignored for local
Oct 10 06:09:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:49 np0005479822 systemd[1]: Started Process Core Dump (PID 237964/UID 0).
Oct 10 06:09:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:50.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:50 np0005479822 podman[237966]: 2025-10-10 10:09:50.038228116 +0000 UTC m=+0.135362207 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.354 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.354 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.355 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.368 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.369 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:09:50 np0005479822 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:09:50 np0005479822 systemd-coredump[237965]: Process 237773 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 42:#012#0  0x00007f8abd3e232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:09:51 np0005479822 systemd[1]: systemd-coredump@11-237964-0.service: Deactivated successfully.
Oct 10 06:09:51 np0005479822 systemd[1]: systemd-coredump@11-237964-0.service: Consumed 1.336s CPU time.
Oct 10 06:09:51 np0005479822 podman[237996]: 2025-10-10 10:09:51.169253581 +0000 UTC m=+0.026298641 container died bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 06:09:51 np0005479822 systemd[1]: var-lib-containers-storage-overlay-22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680-merged.mount: Deactivated successfully.
Oct 10 06:09:51 np0005479822 podman[237996]: 2025-10-10 10:09:51.22550436 +0000 UTC m=+0.082549440 container remove bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 06:09:51 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:09:51 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Oct 10 06:09:51 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:09:51 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.767s CPU time.
Oct 10 06:09:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:51.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:09:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Oct 10 06:09:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:53 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Oct 10 06:09:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:53.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Oct 10 06:09:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Oct 10 06:09:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100955 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:09:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:09:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:56.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:09:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:09:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:09:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:58.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:09:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:09:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:09:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:00.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:00 np0005479822 ceph-mon[79167]: overall HEALTH_OK
Oct 10 06:10:01 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 12.
Oct 10 06:10:01 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:10:01 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.767s CPU time.
Oct 10 06:10:01 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:10:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:01.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:01 np0005479822 podman[238123]: 2025-10-10 10:10:01.947220699 +0000 UTC m=+0.049698393 container create 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Oct 10 06:10:01 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:10:01 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:10:01 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:10:01 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:10:02 np0005479822 podman[238123]: 2025-10-10 10:10:02.014055894 +0000 UTC m=+0.116533618 container init 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 06:10:02 np0005479822 podman[238123]: 2025-10-10 10:10:01.924050154 +0000 UTC m=+0.026527888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:10:02 np0005479822 podman[238123]: 2025-10-10 10:10:02.020624581 +0000 UTC m=+0.123102275 container start 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 06:10:02 np0005479822 bash[238123]: 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:10:02 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:10:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:02.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:10:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:10:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:04 np0005479822 podman[238181]: 2025-10-10 10:10:04.006688049 +0000 UTC m=+0.105665914 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 06:10:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:04.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:05.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:06.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:10:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:07.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:10:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:08.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:10:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:10:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:10.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:10:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:10:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:10:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:10:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:10:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:10:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:11.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:12.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:13.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:14.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 06:10:14 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:14.637 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:10:14 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:14.638 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:16.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:17 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:10:17 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:10:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101017 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:10:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:10:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:10:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:18.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:18 np0005479822 podman[238353]: 2025-10-10 10:10:18.960729187 +0000 UTC m=+0.058717236 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:10:18 np0005479822 podman[238354]: 2025-10-10 10:10:18.969023281 +0000 UTC m=+0.067170395 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.204184) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019204211, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 711, "num_deletes": 257, "total_data_size": 1346065, "memory_usage": 1366672, "flush_reason": "Manual Compaction"}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019214635, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 870498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24403, "largest_seqno": 25109, "table_properties": {"data_size": 867026, "index_size": 1316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7925, "raw_average_key_size": 18, "raw_value_size": 859782, "raw_average_value_size": 2008, "num_data_blocks": 58, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090985, "oldest_key_time": 1760090985, "file_creation_time": 1760091019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 10509 microseconds, and 4385 cpu microseconds.
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.214685) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 870498 bytes OK
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.214708) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.220486) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.220510) EVENT_LOG_v1 {"time_micros": 1760091019220504, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.220532) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1342151, prev total WAL file size 1342151, number of live WAL files 2.
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.221058) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(850KB)], [45(12MB)]
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019221094, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13578294, "oldest_snapshot_seqno": -1}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5410 keys, 13423246 bytes, temperature: kUnknown
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019278925, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13423246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13387091, "index_size": 21517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138016, "raw_average_key_size": 25, "raw_value_size": 13289148, "raw_average_value_size": 2456, "num_data_blocks": 879, "num_entries": 5410, "num_filter_entries": 5410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.279155) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13423246 bytes
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.280700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.4 rd, 231.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.1 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(31.0) write-amplify(15.4) OK, records in: 5941, records dropped: 531 output_compression: NoCompression
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.280715) EVENT_LOG_v1 {"time_micros": 1760091019280708, "job": 26, "event": "compaction_finished", "compaction_time_micros": 57917, "compaction_time_cpu_micros": 30449, "output_level": 6, "num_output_files": 1, "total_output_size": 13423246, "num_input_records": 5941, "num_output_records": 5410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019281028, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019283181, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.221015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:10:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:10:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:19.640 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 06:10:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:20.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:21 np0005479822 podman[238391]: 2025-10-10 10:10:21.041559894 +0000 UTC m=+0.136983491 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 06:10:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:24.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:10:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:28.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:10:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:30.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.344 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.344 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.374 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 06:10:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.491 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.492 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.499 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.500 2 INFO nova.compute.claims [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Claim successful on node compute-1.ctlplane.example.com
Oct 10 06:10:32 np0005479822 nova_compute[235132]: 2025-10-10 10:10:32.633 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:10:33 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3328175243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.030 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.036 2 DEBUG nova.compute.provider_tree [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.060 2 DEBUG nova.scheduler.client.report [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.087 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.087 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.139 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.140 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.173 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.201 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.321 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.323 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.323 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Creating image(s)
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.367 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.408 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.444 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.448 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:10:33 np0005479822 nova_compute[235132]: 2025-10-10 10:10:33.449 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:10:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:33.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:34 np0005479822 nova_compute[235132]: 2025-10-10 10:10:34.153 2 DEBUG nova.virt.libvirt.imagebackend [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image locations are: [{'url': 'rbd://21f084a3-af34-5230-afe4-ea5cd24a55f4/images/5ae78700-970d-45b4-a57d-978a054c7519/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://21f084a3-af34-5230-afe4-ea5cd24a55f4/images/5ae78700-970d-45b4-a57d-978a054c7519/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 10 06:10:34 np0005479822 nova_compute[235132]: 2025-10-10 10:10:34.532 2 WARNING oslo_policy.policy [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 10 06:10:34 np0005479822 nova_compute[235132]: 2025-10-10 10:10:34.533 2 WARNING oslo_policy.policy [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 10 06:10:34 np0005479822 nova_compute[235132]: 2025-10-10 10:10:34.540 2 DEBUG nova.policy [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 06:10:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:34 np0005479822 podman[238501]: 2025-10-10 10:10:34.96157498 +0000 UTC m=+0.061774280 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.256 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.339 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.340 2 DEBUG nova.virt.images [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] 5ae78700-970d-45b4-a57d-978a054c7519 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.341 2 DEBUG nova.privsep.utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.342 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.524 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.533 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.613 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.615 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.659 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 06:10:35 np0005479822 nova_compute[235132]: 2025-10-10 10:10:35.664 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 b8379f65-91e0-45a5-a245-a1bc27260f20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:35.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:35 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Oct 10 06:10:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:36 np0005479822 nova_compute[235132]: 2025-10-10 10:10:36.570 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Successfully created port: 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 06:10:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:36 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.332 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 b8379f65-91e0-45a5-a245-a1bc27260f20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.448 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.595 2 DEBUG nova.objects.instance [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.622 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.623 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Ensure instance console log exists: /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.623 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.624 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:10:37 np0005479822 nova_compute[235132]: 2025-10-10 10:10:37.624 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:10:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:38 np0005479822 nova_compute[235132]: 2025-10-10 10:10:38.717 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Successfully updated port: 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:10:38 np0005479822 nova_compute[235132]: 2025-10-10 10:10:38.742 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:10:38 np0005479822 nova_compute[235132]: 2025-10-10 10:10:38.742 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:10:38 np0005479822 nova_compute[235132]: 2025-10-10 10:10:38.742 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:10:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101038 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:10:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:39 np0005479822 nova_compute[235132]: 2025-10-10 10:10:39.085 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 06:10:39 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Oct 10 06:10:39 np0005479822 nova_compute[235132]: 2025-10-10 10:10:39.318 2 DEBUG nova.compute.manager [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:10:39 np0005479822 nova_compute[235132]: 2025-10-10 10:10:39.318 2 DEBUG nova.compute.manager [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing instance network info cache due to event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:10:39 np0005479822 nova_compute[235132]: 2025-10-10 10:10:39.319 2 DEBUG oslo_concurrency.lockutils [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:10:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.200 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.223 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.223 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance network_info: |[{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.224 2 DEBUG oslo_concurrency.lockutils [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.225 2 DEBUG nova.network.neutron [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.230 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start _get_guest_xml network_info=[{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.238 2 WARNING nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.255 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.256 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.260 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.261 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.261 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.261 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.264 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.264 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.270 2 DEBUG nova.privsep.utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.270 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:10:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:40 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:10:40 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/809446276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.696 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.733 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:10:40 np0005479822 nova_compute[235132]: 2025-10-10 10:10:40.740 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:10:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:41 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:10:41 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1280200693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.219 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.222 2 DEBUG nova.virt.libvirt.vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823645149',display_name='tempest-TestNetworkBasicOps-server-1823645149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823645149',id=1,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1cySxBL6pw+6qEpturfgqFpVsnU32fmvYm1ovqdR9d7Yu/HsSXnbP11SE0LsPImrqW3NM7Ipp+q9ZG2BlkPbNPH4TMiwgnLU7hJmzvd5980ZxncdeOwTfn8+UHeM5LSQ==',key_name='tempest-TestNetworkBasicOps-1606841299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-riep0t81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:10:33Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=b8379f65-91e0-45a5-a245-a1bc27260f20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.223 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.224 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.228 2 DEBUG nova.objects.instance [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.247 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] End _get_guest_xml xml=<domain type="kvm">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <uuid>b8379f65-91e0-45a5-a245-a1bc27260f20</uuid>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <name>instance-00000001</name>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <memory>131072</memory>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <vcpu>1</vcpu>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:name>tempest-TestNetworkBasicOps-server-1823645149</nova:name>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:creationTime>2025-10-10 10:10:40</nova:creationTime>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:flavor name="m1.nano">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:memory>128</nova:memory>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:disk>1</nova:disk>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:swap>0</nova:swap>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:vcpus>1</nova:vcpus>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </nova:flavor>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:owner>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </nova:owner>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <nova:ports>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <nova:port uuid="3281ffe2-3fe8-4217-bcda-e7f8c55f5dae">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        </nova:port>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </nova:ports>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </nova:instance>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <sysinfo type="smbios">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <entry name="manufacturer">RDO</entry>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <entry name="product">OpenStack Compute</entry>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <entry name="serial">b8379f65-91e0-45a5-a245-a1bc27260f20</entry>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <entry name="uuid">b8379f65-91e0-45a5-a245-a1bc27260f20</entry>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <entry name="family">Virtual Machine</entry>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <boot dev="hd"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <smbios mode="sysinfo"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <vmcoreinfo/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <clock offset="utc">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <timer name="hpet" present="no"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <cpu mode="host-model" match="exact">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <disk type="network" device="disk">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/b8379f65-91e0-45a5-a245-a1bc27260f20_disk">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <target dev="vda" bus="virtio"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <disk type="network" device="cdrom">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <target dev="sda" bus="sata"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <interface type="ethernet">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <mac address="fa:16:3e:f6:78:9d"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <mtu size="1442"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <target dev="tap3281ffe2-3f"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <serial type="pty">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <log file="/var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/console.log" append="off"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <input type="tablet" bus="usb"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <rng model="virtio">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <backend model="random">/dev/urandom</backend>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <controller type="usb" index="0"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    <memballoon model="virtio">
Oct 10 06:10:41 np0005479822 nova_compute[235132]:      <stats period="10"/>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:10:41 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:10:41 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:10:41 np0005479822 nova_compute[235132]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.248 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Preparing to wait for external event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.248 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.249 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.249 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.250 2 DEBUG nova.virt.libvirt.vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823645149',display_name='tempest-TestNetworkBasicOps-server-1823645149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823645149',id=1,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1cySxBL6pw+6qEpturfgqFpVsnU32fmvYm1ovqdR9d7Yu/HsSXnbP11SE0LsPImrqW3NM7Ipp+q9ZG2BlkPbNPH4TMiwgnLU7hJmzvd5980ZxncdeOwTfn8+UHeM5LSQ==',key_name='tempest-TestNetworkBasicOps-1606841299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-riep0t81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:10:33Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=b8379f65-91e0-45a5-a245-a1bc27260f20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.251 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.252 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.252 2 DEBUG os_vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.308 2 DEBUG ovsdbapp.backend.ovs_idl [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.308 2 DEBUG ovsdbapp.backend.ovs_idl [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.309 2 DEBUG ovsdbapp.backend.ovs_idl [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:10:41 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.326 2 INFO oslo.privsep.daemon [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpy733erh3/privsep.sock']#033[00m
Oct 10 06:10:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.003 2 INFO oslo.privsep.daemon [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.867 521 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.872 521 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.874 521 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:41.875 521 INFO oslo.privsep.daemon [-] privsep daemon running as pid 521#033[00m
Oct 10 06:10:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:42.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:42.204 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:10:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:42.205 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:10:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:42.205 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3281ffe2-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3281ffe2-3f, col_values=(('external_ids', {'iface-id': '3281ffe2-3fe8-4217-bcda-e7f8c55f5dae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:78:9d', 'vm-uuid': 'b8379f65-91e0-45a5-a245-a1bc27260f20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:10:42 np0005479822 NetworkManager[44982]: <info>  [1760091042.3441] manager: (tap3281ffe2-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.354 2 INFO os_vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f')#033[00m
Oct 10 06:10:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.418 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.419 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.420 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:f6:78:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.421 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Using config drive#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.462 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:10:42 np0005479822 nova_compute[235132]: 2025-10-10 10:10:42.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:43 np0005479822 nova_compute[235132]: 2025-10-10 10:10:43.556 2 DEBUG nova.network.neutron [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated VIF entry in instance network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:10:43 np0005479822 nova_compute[235132]: 2025-10-10 10:10:43.556 2 DEBUG nova.network.neutron [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:10:43 np0005479822 nova_compute[235132]: 2025-10-10 10:10:43.576 2 DEBUG oslo_concurrency.lockutils [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 06:10:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:43.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.544 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Creating config drive at /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.551 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_w0m0kgf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.702 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_w0m0kgf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.747 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.753 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:10:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.951 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:10:44 np0005479822 nova_compute[235132]: 2025-10-10 10:10:44.953 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deleting local config drive /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config because it was imported into RBD.
Oct 10 06:10:44 np0005479822 systemd[1]: Starting libvirt secret daemon...
Oct 10 06:10:45 np0005479822 systemd[1]: Started libvirt secret daemon.
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.081 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.082 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.082 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.104 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:10:45 np0005479822 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 10 06:10:45 np0005479822 kernel: tap3281ffe2-3f: entered promiscuous mode
Oct 10 06:10:45 np0005479822 NetworkManager[44982]: <info>  [1760091045.1115] manager: (tap3281ffe2-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 10 06:10:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:45Z|00027|binding|INFO|Claiming lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for this chassis.
Oct 10 06:10:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:45Z|00028|binding|INFO|3281ffe2-3fe8-4217-bcda-e7f8c55f5dae: Claiming fa:16:3e:f6:78:9d 10.100.0.6
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:10:45 np0005479822 systemd-udevd[238834]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.169 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:78:9d 10.100.0.6'], port_security=['fa:16:3e:f6:78:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8379f65-91e0-45a5-a245-a1bc27260f20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9213b2d5-68f1-49a1-a3cf-ea56345963fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cf25de6-ad2e-407a-bd52-f4f32badc3ec, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.170 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae in datapath bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 bound to our chassis
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.172 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.173 141156 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpyjp0jsis/privsep.sock']
Oct 10 06:10:45 np0005479822 NetworkManager[44982]: <info>  [1760091045.1781] device (tap3281ffe2-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:10:45 np0005479822 NetworkManager[44982]: <info>  [1760091045.1789] device (tap3281ffe2-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:10:45 np0005479822 systemd-machined[191637]: New machine qemu-1-instance-00000001.
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:10:45 np0005479822 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 10 06:10:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:45Z|00029|binding|INFO|Setting lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae ovn-installed in OVS
Oct 10 06:10:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:45Z|00030|binding|INFO|Setting lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae up in Southbound
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:10:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:45.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.860 141156 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.861 141156 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyjp0jsis/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.747 238898 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.752 238898 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.754 238898 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.754 238898 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238898
Oct 10 06:10:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.864 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cda263-6ce8-4638-a73e-1265b72a5d22]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.929 2 DEBUG nova.compute.manager [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.930 2 DEBUG oslo_concurrency.lockutils [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.930 2 DEBUG oslo_concurrency.lockutils [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.930 2 DEBUG oslo_concurrency.lockutils [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:10:45 np0005479822 nova_compute[235132]: 2025-10-10 10:10:45.931 2 DEBUG nova.compute.manager [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Processing event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 06:10:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.183 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091046.182954, b8379f65-91e0-45a5-a245-a1bc27260f20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.184 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Started (Lifecycle Event)
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.187 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.192 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.196 2 INFO nova.virt.libvirt.driver [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance spawned successfully.
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.196 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.248 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.253 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.263 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.263 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.264 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.264 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.265 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.265 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.273 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.273 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091046.183129, b8379f65-91e0-45a5-a245-a1bc27260f20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.273 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Paused (Lifecycle Event)
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.307 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.311 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091046.1907501, b8379f65-91e0-45a5-a245-a1bc27260f20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.312 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Resumed (Lifecycle Event)
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.325 2 INFO nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 13.00 seconds to spawn the instance on the hypervisor.
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.326 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.333 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.336 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 06:10:46 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:46.557 238898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:10:46 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:46.561 238898 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:10:46 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:46.561 238898 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:10:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.688 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.738 2 INFO nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 14.27 seconds to build instance.
Oct 10 06:10:46 np0005479822 nova_compute[235132]: 2025-10-10 10:10:46.760 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:10:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.288 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1066f9-2aac-456d-ab48-6178219806ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.289 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc8bfbd1-b1 in ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.291 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc8bfbd1-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.292 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9146cef0-a189-43e8-a5dd-ebe39cb54735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.296 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9966277b-9347-4e5f-b078-764edeb201e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.330 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[78be8b0a-6626-46a1-a0ab-84bd5fb4ab75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:10:47 np0005479822 nova_compute[235132]: 2025-10-10 10:10:47.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.361 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3698945f-a38c-4d70-862b-02f4ad0b584a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:10:47 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.364 141156 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpni3167tp/privsep.sock']
Oct 10 06:10:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:47 np0005479822 nova_compute[235132]: 2025-10-10 10:10:47.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:47.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG nova.compute.manager [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG oslo_concurrency.lockutils [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG oslo_concurrency.lockutils [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG oslo_concurrency.lockutils [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG nova.compute.manager [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] No waiting events found dispatching network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.009 2 WARNING nova.compute.manager [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received unexpected event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for instance with vm_state active and task_state None.#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.077 141156 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.078 141156 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpni3167tp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.921 238913 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.926 238913 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.928 238913 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.928 238913 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238913#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.082 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a142bc6c-8499-462f-87cc-58b9cb8382ef]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.114 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.115 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:48 np0005479822 nova_compute[235132]: 2025-10-10 10:10:48.115 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.617 238913 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.617 238913 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:10:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.617 238913 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:10:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.067 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.197 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[4b17a7d2-41e2-457d-8aac-bfca96ef594b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.203 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[dabf970e-d5d0-4ccb-8194-15291a6ee46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 NetworkManager[44982]: <info>  [1760091049.2056] manager: (tapbc8bfbd1-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.249 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd28f54-e7f0-46fe-aaa8-dbb6d586cd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.255 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a83cfd7b-68ea-41fa-aedb-e103a13ce0fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 systemd-udevd[238962]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:10:49 np0005479822 NetworkManager[44982]: <info>  [1760091049.2844] device (tapbc8bfbd1-b0): carrier: link connected
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.290 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[f568382a-730f-4484-a4d4-98f62825cba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.316 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[52145dbf-c7e7-4853-937e-d58a6e913cc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc8bfbd1-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:59:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396299, 'reachable_time': 20753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238970, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.332 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cddbb7-fc3e-4f4b-80ff-44a8b600ff00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:5920'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396299, 'tstamp': 396299}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238998, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 podman[238944]: 2025-10-10 10:10:49.338662117 +0000 UTC m=+0.094353910 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.350 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[2790fd6b-e8ab-4e44-a359-106a8dbeb61c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc8bfbd1-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:59:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396299, 'reachable_time': 20753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239001, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 podman[238941]: 2025-10-10 10:10:49.361372288 +0000 UTC m=+0.120801934 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.387 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a4306a50-ecb6-47cb-a863-87076f8ef224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.454 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd6f323-912c-49bd-9dcb-3474cc23f1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.456 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc8bfbd1-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.456 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.457 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc8bfbd1-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:10:49 np0005479822 NetworkManager[44982]: <info>  [1760091049.4596] manager: (tapbc8bfbd1-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 10 06:10:49 np0005479822 kernel: tapbc8bfbd1-b0: entered promiscuous mode
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.465 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc8bfbd1-b0, col_values=(('external_ids', {'iface-id': '39ed96bc-4f7e-4f78-812d-fbc3e55cd01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:49 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:49Z|00031|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.469 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.477 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[951fdabc-b446-4932-8a34-f52a8efc3bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.479 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.pid.haproxy
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:10:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.479 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'env', 'PROCESS_TAG=haproxy-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:49 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:10:49 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2896383186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.565 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.695 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.696 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:10:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:49.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.881 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.883 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4810MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.883 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:10:49 np0005479822 nova_compute[235132]: 2025-10-10 10:10:49.883 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:10:49 np0005479822 podman[239040]: 2025-10-10 10:10:49.910957885 +0000 UTC m=+0.060967298 container create 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:10:49 np0005479822 systemd[1]: Started libpod-conmon-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485.scope.
Oct 10 06:10:49 np0005479822 podman[239040]: 2025-10-10 10:10:49.881052577 +0000 UTC m=+0.031061970 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:10:49 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:10:50 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9cdc93c5c72ed4f0d25c08523e3110ee9304b40ab9c09ff8991fb32bfd66f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:10:50 np0005479822 podman[239040]: 2025-10-10 10:10:50.018592907 +0000 UTC m=+0.168602290 container init 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 06:10:50 np0005479822 podman[239040]: 2025-10-10 10:10:50.028156418 +0000 UTC m=+0.178165801 container start 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.033 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance b8379f65-91e0-45a5-a245-a1bc27260f20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.034 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.034 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:10:50 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : New worker (239062) forked
Oct 10 06:10:50 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : Loading success.
Oct 10 06:10:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.130 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.213 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.213 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.228 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.255 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.288 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:10:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:50 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:10:50 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3253010054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.749 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.759 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.823 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updated inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.824 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.824 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.851 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:10:50 np0005479822 nova_compute[235132]: 2025-10-10 10:10:50.852 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:10:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:10:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:10:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:51 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:51Z|00032|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1680] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1685] device (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1694] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1696] device (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1702] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1707] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1710] device (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 06:10:51 np0005479822 NetworkManager[44982]: <info>  [1760091051.1712] device (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 06:10:51 np0005479822 nova_compute[235132]: 2025-10-10 10:10:51.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:51 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:51Z|00033|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 06:10:51 np0005479822 nova_compute[235132]: 2025-10-10 10:10:51.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:51 np0005479822 nova_compute[235132]: 2025-10-10 10:10:51.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:51 np0005479822 nova_compute[235132]: 2025-10-10 10:10:51.852 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:51 np0005479822 nova_compute[235132]: 2025-10-10 10:10:51.853 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:10:51 np0005479822 nova_compute[235132]: 2025-10-10 10:10:51.854 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:10:51 np0005479822 podman[239095]: 2025-10-10 10:10:51.872524845 +0000 UTC m=+0.119933270 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:10:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.134 2 DEBUG nova.compute.manager [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.135 2 DEBUG nova.compute.manager [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing instance network info cache due to event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.136 2 DEBUG oslo_concurrency.lockutils [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.136 2 DEBUG oslo_concurrency.lockutils [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.137 2 DEBUG nova.network.neutron [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.205 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:52 np0005479822 nova_compute[235132]: 2025-10-10 10:10:52.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5344001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:10:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:10:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:10:54 np0005479822 nova_compute[235132]: 2025-10-10 10:10:53.999 2 DEBUG nova.network.neutron [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated VIF entry in instance network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:10:54 np0005479822 nova_compute[235132]: 2025-10-10 10:10:54.000 2 DEBUG nova.network.neutron [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:10:54 np0005479822 nova_compute[235132]: 2025-10-10 10:10:54.024 2 DEBUG oslo_concurrency.lockutils [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:10:54 np0005479822 nova_compute[235132]: 2025-10-10 10:10:54.026 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:10:54 np0005479822 nova_compute[235132]: 2025-10-10 10:10:54.027 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 06:10:54 np0005479822 nova_compute[235132]: 2025-10-10 10:10:54.028 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:10:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:55 np0005479822 nova_compute[235132]: 2025-10-10 10:10:55.342 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:10:55 np0005479822 nova_compute[235132]: 2025-10-10 10:10:55.366 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:10:55 np0005479822 nova_compute[235132]: 2025-10-10 10:10:55.367 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 06:10:55 np0005479822 nova_compute[235132]: 2025-10-10 10:10:55.368 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:55 np0005479822 nova_compute[235132]: 2025-10-10 10:10:55.368 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:55 np0005479822 nova_compute[235132]: 2025-10-10 10:10:55.369 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:10:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:57 np0005479822 nova_compute[235132]: 2025-10-10 10:10:57.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:10:57 np0005479822 nova_compute[235132]: 2025-10-10 10:10:57.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:10:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:10:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:10:57 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 06:10:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:10:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:58.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:10:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101058 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:10:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:59 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:59Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:78:9d 10.100.0.6
Oct 10 06:10:59 np0005479822 ovn_controller[131749]: 2025-10-10T10:10:59Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:78:9d 10.100.0.6
Oct 10 06:10:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:10:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:10:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:10:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:59.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:11:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:01.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:02 np0005479822 nova_compute[235132]: 2025-10-10 10:11:02.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:02 np0005479822 nova_compute[235132]: 2025-10-10 10:11:02.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:04 np0005479822 nova_compute[235132]: 2025-10-10 10:11:04.640 2 INFO nova.compute.manager [None req-6625c77b-2558-407d-bb18-30af1a7e06f6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Get console output#033[00m
Oct 10 06:11:04 np0005479822 nova_compute[235132]: 2025-10-10 10:11:04.646 2 INFO oslo.privsep.daemon [None req-6625c77b-2558-407d-bb18-30af1a7e06f6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp44lnzc9u/privsep.sock']#033[00m
Oct 10 06:11:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:05 np0005479822 nova_compute[235132]: 2025-10-10 10:11:05.365 2 INFO oslo.privsep.daemon [None req-6625c77b-2558-407d-bb18-30af1a7e06f6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 10 06:11:05 np0005479822 nova_compute[235132]: 2025-10-10 10:11:05.233 631 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 10 06:11:05 np0005479822 nova_compute[235132]: 2025-10-10 10:11:05.239 631 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 10 06:11:05 np0005479822 nova_compute[235132]: 2025-10-10 10:11:05.243 631 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 10 06:11:05 np0005479822 nova_compute[235132]: 2025-10-10 10:11:05.244 631 INFO oslo.privsep.daemon [-] privsep daemon running as pid 631#033[00m
Oct 10 06:11:05 np0005479822 nova_compute[235132]: 2025-10-10 10:11:05.476 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 10 06:11:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:05 np0005479822 podman[239162]: 2025-10-10 10:11:05.969303367 +0000 UTC m=+0.075681981 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:11:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:07 np0005479822 nova_compute[235132]: 2025-10-10 10:11:07.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:07 np0005479822 nova_compute[235132]: 2025-10-10 10:11:07.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:12.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:12 np0005479822 nova_compute[235132]: 2025-10-10 10:11:12.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:12 np0005479822 nova_compute[235132]: 2025-10-10 10:11:12.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:16.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:17 np0005479822 nova_compute[235132]: 2025-10-10 10:11:17.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:17 np0005479822 nova_compute[235132]: 2025-10-10 10:11:17.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:17.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:18 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:11:18 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:11:18 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:11:18 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:11:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.255656) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079255706, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 880, "num_deletes": 251, "total_data_size": 1686469, "memory_usage": 1712720, "flush_reason": "Manual Compaction"}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079264770, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1112996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25114, "largest_seqno": 25989, "table_properties": {"data_size": 1108996, "index_size": 1716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9395, "raw_average_key_size": 19, "raw_value_size": 1100701, "raw_average_value_size": 2307, "num_data_blocks": 77, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091020, "oldest_key_time": 1760091020, "file_creation_time": 1760091079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9146 microseconds, and 3977 cpu microseconds.
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.264805) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1112996 bytes OK
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.264824) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.265897) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.265909) EVENT_LOG_v1 {"time_micros": 1760091079265905, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.265924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1681942, prev total WAL file size 1681942, number of live WAL files 2.
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.266522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1086KB)], [48(12MB)]
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079266596, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14536242, "oldest_snapshot_seqno": -1}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5369 keys, 12453499 bytes, temperature: kUnknown
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079317678, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12453499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12418465, "index_size": 20524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137899, "raw_average_key_size": 25, "raw_value_size": 12321874, "raw_average_value_size": 2295, "num_data_blocks": 834, "num_entries": 5369, "num_filter_entries": 5369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.318124) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12453499 bytes
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.319554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 283.7 rd, 243.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.8 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(24.2) write-amplify(11.2) OK, records in: 5887, records dropped: 518 output_compression: NoCompression
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.319586) EVENT_LOG_v1 {"time_micros": 1760091079319570, "job": 28, "event": "compaction_finished", "compaction_time_micros": 51244, "compaction_time_cpu_micros": 30794, "output_level": 6, "num_output_files": 1, "total_output_size": 12453499, "num_input_records": 5887, "num_output_records": 5369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079320113, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079325030, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.266401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:11:19 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:11:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:19.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:19 np0005479822 podman[239297]: 2025-10-10 10:11:19.98524545 +0000 UTC m=+0.086094716 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 06:11:20 np0005479822 podman[239298]: 2025-10-10 10:11:20.013975835 +0000 UTC m=+0.115283813 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 06:11:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:21.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:22 np0005479822 nova_compute[235132]: 2025-10-10 10:11:22.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:22 np0005479822 podman[239362]: 2025-10-10 10:11:22.609535279 +0000 UTC m=+0.119728835 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct 10 06:11:22 np0005479822 nova_compute[235132]: 2025-10-10 10:11:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:22 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:11:22.751 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:11:22 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:11:22.753 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:11:22 np0005479822 nova_compute[235132]: 2025-10-10 10:11:22.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340002370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:23 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:11:23 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:11:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340002370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:11:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:11:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:26.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:27 np0005479822 nova_compute[235132]: 2025-10-10 10:11:27.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:27 np0005479822 nova_compute[235132]: 2025-10-10 10:11:27.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101127 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:11:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:29.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:31 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:11:31.755 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:11:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:31.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:32.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:32 np0005479822 nova_compute[235132]: 2025-10-10 10:11:32.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:32 np0005479822 nova_compute[235132]: 2025-10-10 10:11:32.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:33.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:34.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:35.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:37 np0005479822 podman[239398]: 2025-10-10 10:11:37.006462999 +0000 UTC m=+0.098440602 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 06:11:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:11:37 np0005479822 nova_compute[235132]: 2025-10-10 10:11:37.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:37 np0005479822 nova_compute[235132]: 2025-10-10 10:11:37.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:37.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:39.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:11:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:11:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:41 np0005479822 nova_compute[235132]: 2025-10-10 10:11:41.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:11:42.206 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:11:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:11:42.206 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:11:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:11:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:11:42 np0005479822 nova_compute[235132]: 2025-10-10 10:11:42.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:42 np0005479822 nova_compute[235132]: 2025-10-10 10:11:42.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:11:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:43.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:47 np0005479822 nova_compute[235132]: 2025-10-10 10:11:47.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:47 np0005479822 nova_compute[235132]: 2025-10-10 10:11:47.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:48 np0005479822 nova_compute[235132]: 2025-10-10 10:11:48.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:48 np0005479822 nova_compute[235132]: 2025-10-10 10:11:48.047 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101149 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:11:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:49.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:50 np0005479822 nova_compute[235132]: 2025-10-10 10:11:50.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:50 np0005479822 nova_compute[235132]: 2025-10-10 10:11:50.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:50 np0005479822 nova_compute[235132]: 2025-10-10 10:11:50.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:11:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:50.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:51 np0005479822 podman[239451]: 2025-10-10 10:11:51.006109148 +0000 UTC m=+0.094382031 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 06:11:51 np0005479822 podman[239452]: 2025-10-10 10:11:51.009617854 +0000 UTC m=+0.096949252 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.193 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.194 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.195 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 06:11:51 np0005479822 nova_compute[235132]: 2025-10-10 10:11:51.195 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:11:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:51.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:52.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:52 np0005479822 nova_compute[235132]: 2025-10-10 10:11:52.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:52 np0005479822 nova_compute[235132]: 2025-10-10 10:11:52.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:52 np0005479822 podman[239494]: 2025-10-10 10:11:52.961798917 +0000 UTC m=+0.076073510 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 10 06:11:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.239 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.253 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.254 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.254 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.255 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.255 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.255 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.279 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.280 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.281 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.281 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.281 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:11:53 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:11:53 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/846796813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:11:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.771 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.846 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:11:53 np0005479822 nova_compute[235132]: 2025-10-10 10:11:53.847 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:11:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:53.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.020 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.021 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4725MB free_disk=59.89714813232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.022 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.022 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.085 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance b8379f65-91e0-45a5-a245-a1bc27260f20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.085 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.086 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.125 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:11:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:11:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3776578443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.649 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.658 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.680 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.683 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:11:54 np0005479822 nova_compute[235132]: 2025-10-10 10:11:54.683 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:11:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:55 np0005479822 nova_compute[235132]: 2025-10-10 10:11:55.679 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:11:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:55.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:11:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:56.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:57 np0005479822 nova_compute[235132]: 2025-10-10 10:11:57.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:11:57 np0005479822 nova_compute[235132]: 2025-10-10 10:11:57.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:57.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:11:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:11:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:59 np0005479822 ovn_controller[131749]: 2025-10-10T10:11:59Z|00034|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 06:11:59 np0005479822 nova_compute[235132]: 2025-10-10 10:11:59.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:11:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:11:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:11:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:11:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:59.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.154 2 DEBUG nova.compute.manager [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.155 2 DEBUG nova.compute.manager [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing instance network info cache due to event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.155 2 DEBUG oslo_concurrency.lockutils [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.156 2 DEBUG oslo_concurrency.lockutils [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.156 2 DEBUG nova.network.neutron [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:12:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:00.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.229 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.230 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.230 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.230 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.231 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.233 2 INFO nova.compute.manager [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Terminating instance#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.234 2 DEBUG nova.compute.manager [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 06:12:00 np0005479822 kernel: tap3281ffe2-3f (unregistering): left promiscuous mode
Oct 10 06:12:00 np0005479822 NetworkManager[44982]: <info>  [1760091120.3023] device (tap3281ffe2-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:00Z|00035|binding|INFO|Releasing lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae from this chassis (sb_readonly=0)
Oct 10 06:12:00 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:00Z|00036|binding|INFO|Setting lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae down in Southbound
Oct 10 06:12:00 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:00Z|00037|binding|INFO|Removing iface tap3281ffe2-3f ovn-installed in OVS
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.328 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:78:9d 10.100.0.6'], port_security=['fa:16:3e:f6:78:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8379f65-91e0-45a5-a245-a1bc27260f20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9213b2d5-68f1-49a1-a3cf-ea56345963fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cf25de6-ad2e-407a-bd52-f4f32badc3ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.330 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae in datapath bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 unbound from our chassis#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.332 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.334 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c69455-09d6-4943-b9b2-80660c53cb71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.337 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 namespace which is not needed anymore#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 10 06:12:00 np0005479822 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.900s CPU time.
Oct 10 06:12:00 np0005479822 systemd-machined[191637]: Machine qemu-1-instance-00000001 terminated.
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.480 2 INFO nova.virt.libvirt.driver [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance destroyed successfully.#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.480 2 DEBUG nova.objects.instance [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.496 2 DEBUG nova.virt.libvirt.vif [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823645149',display_name='tempest-TestNetworkBasicOps-server-1823645149',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823645149',id=1,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1cySxBL6pw+6qEpturfgqFpVsnU32fmvYm1ovqdR9d7Yu/HsSXnbP11SE0LsPImrqW3NM7Ipp+q9ZG2BlkPbNPH4TMiwgnLU7hJmzvd5980ZxncdeOwTfn8+UHeM5LSQ==',key_name='tempest-TestNetworkBasicOps-1606841299',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:10:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-riep0t81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:10:46Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=b8379f65-91e0-45a5-a245-a1bc27260f20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.496 2 DEBUG nova.network.os_vif_util [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.497 2 DEBUG nova.network.os_vif_util [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.497 2 DEBUG os_vif [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3281ffe2-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : haproxy version is 2.8.14-c23fe91
Oct 10 06:12:00 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : path to executable is /usr/sbin/haproxy
Oct 10 06:12:00 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [WARNING]  (239060) : Exiting Master process...
Oct 10 06:12:00 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [WARNING]  (239060) : Exiting Master process...
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:12:00 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [ALERT]    (239060) : Current worker (239062) exited with code 143 (Terminated)
Oct 10 06:12:00 np0005479822 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [WARNING]  (239060) : All workers exited. Exiting... (0)
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.562 2 INFO os_vif [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f')#033[00m
Oct 10 06:12:00 np0005479822 systemd[1]: libpod-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485.scope: Deactivated successfully.
Oct 10 06:12:00 np0005479822 podman[239619]: 2025-10-10 10:12:00.570043032 +0000 UTC m=+0.110396680 container died 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:12:00 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485-userdata-shm.mount: Deactivated successfully.
Oct 10 06:12:00 np0005479822 systemd[1]: var-lib-containers-storage-overlay-9d9cdc93c5c72ed4f0d25c08523e3110ee9304b40ab9c09ff8991fb32bfd66f7-merged.mount: Deactivated successfully.
Oct 10 06:12:00 np0005479822 podman[239619]: 2025-10-10 10:12:00.61386426 +0000 UTC m=+0.154217868 container cleanup 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 06:12:00 np0005479822 systemd[1]: libpod-conmon-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485.scope: Deactivated successfully.
Oct 10 06:12:00 np0005479822 podman[239673]: 2025-10-10 10:12:00.702458082 +0000 UTC m=+0.062196831 container remove 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.713 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7f726503-2a00-46a1-b19f-fa623f41c72f]: (4, ('Fri Oct 10 10:12:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 (938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485)\n938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485\nFri Oct 10 10:12:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 (938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485)\n938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.716 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[68dcf478-702f-4d2c-9ced-e26d788cc1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.718 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc8bfbd1-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 kernel: tapbc8bfbd1-b0: left promiscuous mode
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.750 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[563d056a-97b2-416b-a0c0-5e520923c45e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.777 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7eebd6-271d-4338-bfc5-d3376a8086e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.779 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7a4495-ea16-471f-95fd-0c567c93f132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.800 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[58162ba0-99a6-47e9-9556-5680e1a484eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396289, 'reachable_time': 36370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239694, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 systemd[1]: run-netns-ovnmeta\x2dbc8bfbd1\x2db5ac\x2d42d3\x2db24d\x2dbaf38dabaf11.mount: Deactivated successfully.
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.825 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 06:12:00 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.825 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[77192183-6022-4e3c-a832-ab8147580542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.958 2 DEBUG nova.compute.manager [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-unplugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.959 2 DEBUG oslo_concurrency.lockutils [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG oslo_concurrency.lockutils [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG oslo_concurrency.lockutils [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG nova.compute.manager [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] No waiting events found dispatching network-vif-unplugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG nova.compute.manager [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-unplugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.995 2 INFO nova.virt.libvirt.driver [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deleting instance files /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20_del#033[00m
Oct 10 06:12:00 np0005479822 nova_compute[235132]: 2025-10-10 10:12:00.996 2 INFO nova.virt.libvirt.driver [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deletion of /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20_del complete#033[00m
Oct 10 06:12:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.061 2 DEBUG nova.virt.libvirt.host [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.062 2 INFO nova.virt.libvirt.host [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] UEFI support detected#033[00m
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.064 2 INFO nova.compute.manager [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.065 2 DEBUG oslo.service.loopingcall [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.065 2 DEBUG nova.compute.manager [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.066 2 DEBUG nova.network.neutron [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 06:12:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:01.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:01 np0005479822 nova_compute[235132]: 2025-10-10 10:12:01.985 2 DEBUG nova.network.neutron [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.006 2 INFO nova.compute.manager [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.078 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.080 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.106 2 DEBUG nova.network.neutron [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated VIF entry in instance network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.107 2 DEBUG nova.network.neutron [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.134 2 DEBUG oslo_concurrency.lockutils [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.146 2 DEBUG oslo_concurrency.processutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:02.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.233 2 DEBUG nova.compute.manager [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-deleted-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.234 2 INFO nova.compute.manager [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Neutron deleted interface 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae; detaching it from the instance and deleting it from the info cache#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.234 2 DEBUG nova.network.neutron [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.262 2 DEBUG nova.compute.manager [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Detach interface failed, port_id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae, reason: Instance b8379f65-91e0-45a5-a245-a1bc27260f20 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct 10 06:12:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:12:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2472464452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.665 2 DEBUG oslo_concurrency.processutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.670 2 DEBUG nova.compute.provider_tree [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.685 2 DEBUG nova.scheduler.client.report [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.706 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.746 2 INFO nova.scheduler.client.report [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance b8379f65-91e0-45a5-a245-a1bc27260f20#033[00m
Oct 10 06:12:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:02 np0005479822 nova_compute[235132]: 2025-10-10 10:12:02.811 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:03 np0005479822 nova_compute[235132]: 2025-10-10 10:12:03.047 2 DEBUG nova.compute.manager [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:03 np0005479822 nova_compute[235132]: 2025-10-10 10:12:03.048 2 DEBUG oslo_concurrency.lockutils [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:03 np0005479822 nova_compute[235132]: 2025-10-10 10:12:03.049 2 DEBUG oslo_concurrency.lockutils [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:03 np0005479822 nova_compute[235132]: 2025-10-10 10:12:03.049 2 DEBUG oslo_concurrency.lockutils [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:03 np0005479822 nova_compute[235132]: 2025-10-10 10:12:03.049 2 DEBUG nova.compute.manager [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] No waiting events found dispatching network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:12:03 np0005479822 nova_compute[235132]: 2025-10-10 10:12:03.050 2 WARNING nova.compute.manager [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received unexpected event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for instance with vm_state deleted and task_state None.#033[00m
Oct 10 06:12:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:03.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:04.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:05 np0005479822 nova_compute[235132]: 2025-10-10 10:12:05.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:06 np0005479822 nova_compute[235132]: 2025-10-10 10:12:06.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:06.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:06 np0005479822 nova_compute[235132]: 2025-10-10 10:12:06.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:07 np0005479822 nova_compute[235132]: 2025-10-10 10:12:07.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:07 np0005479822 podman[239724]: 2025-10-10 10:12:07.970265 +0000 UTC m=+0.065785769 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 06:12:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:08.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:10 np0005479822 nova_compute[235132]: 2025-10-10 10:12:10.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:12 np0005479822 nova_compute[235132]: 2025-10-10 10:12:12.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:15 np0005479822 nova_compute[235132]: 2025-10-10 10:12:15.479 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091120.4778812, b8379f65-91e0-45a5-a245-a1bc27260f20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:12:15 np0005479822 nova_compute[235132]: 2025-10-10 10:12:15.479 2 INFO nova.compute.manager [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Stopped (Lifecycle Event)#033[00m
Oct 10 06:12:15 np0005479822 nova_compute[235132]: 2025-10-10 10:12:15.511 2 DEBUG nova.compute.manager [None req-d8b4fdf1-fb3d-4678-9158-ff523158894b - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:12:15 np0005479822 nova_compute[235132]: 2025-10-10 10:12:15.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:17 np0005479822 nova_compute[235132]: 2025-10-10 10:12:17.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:18.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:20.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:20 np0005479822 nova_compute[235132]: 2025-10-10 10:12:20.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:21 np0005479822 podman[239780]: 2025-10-10 10:12:21.896950252 +0000 UTC m=+0.089249001 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 06:12:21 np0005479822 podman[239781]: 2025-10-10 10:12:21.914412999 +0000 UTC m=+0.094999128 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:12:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:22.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:22 np0005479822 nova_compute[235132]: 2025-10-10 10:12:22.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:12:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:23.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:12:24 np0005479822 podman[239902]: 2025-10-10 10:12:24.018900437 +0000 UTC m=+0.113253537 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 06:12:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:12:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:12:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:12:24 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:12:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:24.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.856 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.857 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.878 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.961 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.961 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.971 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 06:12:24 np0005479822 nova_compute[235132]: 2025-10-10 10:12:24.971 2 INFO nova.compute.claims [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 10 06:12:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.080 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:25 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:12:25 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4121639621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.573 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.581 2 DEBUG nova.compute.provider_tree [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.602 2 DEBUG nova.scheduler.client.report [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.633 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.636 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.724 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.725 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.746 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.763 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 06:12:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.934 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.937 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.937 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Creating image(s)#033[00m
Oct 10 06:12:25 np0005479822 nova_compute[235132]: 2025-10-10 10:12:25.973 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.004 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.031 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.035 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.124 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.125 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.126 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.126 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.154 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.159 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.482 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.583 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
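[editor's note] The sequence above is nova importing the cached base image into the `vms` pool and then resizing the new rbd image to the flavor's 1 GiB (1073741824-byte) root disk. A sketch of how the import argv from the log could be assembled; `build_rbd_import`/`build_rbd_resize` are hypothetical helpers, and note that nova's actual resize (rbd_utils.py:288) goes through the librbd Python binding, not the CLI:

```python
# Hypothetical helpers assembling argv lists like the one
# oslo_concurrency.processutils.execute ran in the log above.
# Building a list (not a shell string) avoids quoting issues.
def build_rbd_import(pool, base_path, image_name,
                     user="openstack", conf="/etc/ceph/ceph.conf"):
    return ["rbd", "import", "--pool", pool, base_path, image_name,
            "--image-format=2", "--id", user, "--conf", conf]

def build_rbd_resize(pool, image_name, size_bytes,
                     user="openstack", conf="/etc/ceph/ceph.conf"):
    # Passing an explicit "B" suffix to --size is an assumption to avoid
    # unit ambiguity; nova itself resizes via librbd, not this CLI.
    return ["rbd", "resize", "--pool", pool, "--image", image_name,
            "--size", f"{size_bytes}B", "--id", user, "--conf", conf]

cmd = build_rbd_import(
    "vms",
    "/var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1",
    "2fe2b257-7e1f-46c2-aed9-0593c533e290_disk")
```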
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.719 2 DEBUG nova.objects.instance [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.740 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.740 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Ensure instance console log exists: /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.741 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.742 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.742 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:26 np0005479822 nova_compute[235132]: 2025-10-10 10:12:26.746 2 DEBUG nova.policy [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 06:12:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:27 np0005479822 nova_compute[235132]: 2025-10-10 10:12:27.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0038a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:29 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:12:29 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:12:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:29 np0005479822 nova_compute[235132]: 2025-10-10 10:12:29.930 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully created port: eb2cd434-444d-4138-bbe8-948bf47d3986 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 06:12:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
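[editor's note] The radosgw `beast:` lines repeating roughly every two seconds are anonymous `HEAD /` probes from 192.168.122.100/.102, which look like load-balancer health checks. A regex sketch for pulling client, status, and latency out of one of these access-log lines (field layout inferred from the samples above, not from a radosgw format spec):

```python
import re

# Fields assumed from the sample lines: "beast: <ptr>: <client> - <user>
# [<ts>] "<request>" <status> <bytes> - - - latency=<seconds>s"
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous '
        '[10/Oct/2025:10:12:30.281 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000027s')
m = BEAST_RE.search(line)
```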
Oct 10 06:12:30 np0005479822 nova_compute[235132]: 2025-10-10 10:12:30.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0038c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:31.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:31 np0005479822 nova_compute[235132]: 2025-10-10 10:12:31.949 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully updated port: eb2cd434-444d-4138-bbe8-948bf47d3986 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:12:31 np0005479822 nova_compute[235132]: 2025-10-10 10:12:31.965 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:12:31 np0005479822 nova_compute[235132]: 2025-10-10 10:12:31.965 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:12:31 np0005479822 nova_compute[235132]: 2025-10-10 10:12:31.965 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:12:32 np0005479822 nova_compute[235132]: 2025-10-10 10:12:32.083 2 DEBUG nova.compute.manager [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:32 np0005479822 nova_compute[235132]: 2025-10-10 10:12:32.084 2 DEBUG nova.compute.manager [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:12:32 np0005479822 nova_compute[235132]: 2025-10-10 10:12:32.084 2 DEBUG oslo_concurrency.lockutils [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:12:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:32 np0005479822 nova_compute[235132]: 2025-10-10 10:12:32.708 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 06:12:32 np0005479822 nova_compute[235132]: 2025-10-10 10:12:32.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.430 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.455 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.456 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance network_info: |[{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
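[editor's note] The `network_info` blob nova logs between `|…|` above is plain JSON. A sketch for pulling out the fields most often needed when reading these entries (port id, MAC, fixed IPs), run against a trimmed copy of the vif entry from the log; `fixed_ips` is an illustrative helper, not a nova API:

```python
import json

# Trimmed copy of the vif entry from the network_info in the log above.
network_info = json.loads("""
[{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986",
  "address": "fa:16:3e:8b:9e:3d",
  "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.6",
                                    "type": "fixed", "version": 4}]}]},
  "type": "ovs", "devname": "tapeb2cd434-44", "active": false}]
""")

def fixed_ips(vifs):
    """Return (port_id, mac, [fixed ip strings]) for each vif."""
    out = []
    for vif in vifs:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]
               if ip.get("type") == "fixed"]
        out.append((vif["id"], vif["address"], ips))
    return out
```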
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.456 2 DEBUG oslo_concurrency.lockutils [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.456 2 DEBUG nova.network.neutron [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.461 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start _get_guest_xml network_info=[{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.471 2 WARNING nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.480 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.482 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.487 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.487 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.488 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.489 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.490 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.490 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.491 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.491 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.492 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.492 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.493 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.493 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.494 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.494 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.499 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0038e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:33.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:33 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:12:33 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3163061778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:12:33 np0005479822 nova_compute[235132]: 2025-10-10 10:12:33.993 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.028 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.032 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:34 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:12:34 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/543711216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.478 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.480 2 DEBUG nova.virt.libvirt.vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:12:25Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.480 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.481 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.483 2 DEBUG nova.objects.instance [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.500 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] End _get_guest_xml xml=<domain type="kvm">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <name>instance-00000003</name>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <memory>131072</memory>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <vcpu>1</vcpu>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:creationTime>2025-10-10 10:12:33</nova:creationTime>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:flavor name="m1.nano">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:memory>128</nova:memory>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:disk>1</nova:disk>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:swap>0</nova:swap>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:vcpus>1</nova:vcpus>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </nova:flavor>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:owner>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </nova:owner>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <nova:ports>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        </nova:port>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </nova:ports>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </nova:instance>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <sysinfo type="smbios">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <entry name="manufacturer">RDO</entry>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <entry name="product">OpenStack Compute</entry>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <entry name="serial">2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <entry name="uuid">2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <entry name="family">Virtual Machine</entry>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <boot dev="hd"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <smbios mode="sysinfo"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <vmcoreinfo/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <clock offset="utc">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <timer name="hpet" present="no"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <cpu mode="host-model" match="exact">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <disk type="network" device="disk">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <target dev="vda" bus="virtio"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <disk type="network" device="cdrom">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <target dev="sda" bus="sata"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <interface type="ethernet">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <mac address="fa:16:3e:8b:9e:3d"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <mtu size="1442"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <target dev="tapeb2cd434-44"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <serial type="pty">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <log file="/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log" append="off"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <input type="tablet" bus="usb"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <rng model="virtio">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <backend model="random">/dev/urandom</backend>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <controller type="usb" index="0"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    <memballoon model="virtio">
Oct 10 06:12:34 np0005479822 nova_compute[235132]:      <stats period="10"/>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:12:34 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:12:34 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:12:34 np0005479822 nova_compute[235132]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.502 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Preparing to wait for external event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.502 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.502 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.503 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.503 2 DEBUG nova.virt.libvirt.vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:12:25Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.504 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.504 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.505 2 DEBUG os_vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.506 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.506 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb2cd434-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb2cd434-44, col_values=(('external_ids', {'iface-id': 'eb2cd434-444d-4138-bbe8-948bf47d3986', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:9e:3d', 'vm-uuid': '2fe2b257-7e1f-46c2-aed9-0593c533e290'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:34 np0005479822 NetworkManager[44982]: <info>  [1760091154.5134] manager: (tapeb2cd434-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.520 2 INFO os_vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44')#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.595 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.596 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.596 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:8b:9e:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.597 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Using config drive#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.629 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.656 2 DEBUG nova.network.neutron [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.657 2 DEBUG nova.network.neutron [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.671 2 DEBUG oslo_concurrency.lockutils [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:12:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.939 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Creating config drive at /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config#033[00m
Oct 10 06:12:34 np0005479822 nova_compute[235132]: 2025-10-10 10:12:34.944 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3xnsivg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101235 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:12:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.084 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3xnsivg" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.118 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.122 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.297 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.298 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deleting local config drive /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config because it was imported into RBD.#033[00m
Oct 10 06:12:35 np0005479822 kernel: tapeb2cd434-44: entered promiscuous mode
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 NetworkManager[44982]: <info>  [1760091155.3645] manager: (tapeb2cd434-44): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 10 06:12:35 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:35Z|00038|binding|INFO|Claiming lport eb2cd434-444d-4138-bbe8-948bf47d3986 for this chassis.
Oct 10 06:12:35 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:35Z|00039|binding|INFO|eb2cd434-444d-4138-bbe8-948bf47d3986: Claiming fa:16:3e:8b:9e:3d 10.100.0.6
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.393 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9e:3d 10.100.0.6'], port_security=['fa:16:3e:8b:9e:3d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b2e1b849-99bd-43fd-883d-af1bb6750e12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86b59927-b11d-4637-a561-9adc673cffb1, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=eb2cd434-444d-4138-bbe8-948bf47d3986) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.394 141156 INFO neutron.agent.ovn.metadata.agent [-] Port eb2cd434-444d-4138-bbe8-948bf47d3986 in datapath c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 bound to our chassis#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.395 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1ba46b2-7e02-4d4f-b296-3e1e1f027d22#033[00m
Oct 10 06:12:35 np0005479822 systemd-machined[191637]: New machine qemu-2-instance-00000003.
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.410 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[166c95a6-e362-4498-9832-d13a20485f48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.411 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1ba46b2-71 in ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.413 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1ba46b2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.413 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[834e2cc3-6453-43f3-8938-ddba2e9dcc1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.414 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[986f01b5-db40-4b0d-869f-d021f6cf4417]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.428 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[c816487e-cf13-4c4c-adc0-5c7bb8d2714a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:35Z|00040|binding|INFO|Setting lport eb2cd434-444d-4138-bbe8-948bf47d3986 ovn-installed in OVS
Oct 10 06:12:35 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:35Z|00041|binding|INFO|Setting lport eb2cd434-444d-4138-bbe8-948bf47d3986 up in Southbound
Oct 10 06:12:35 np0005479822 systemd-udevd[240285]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.455 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce694e2-04b5-4d05-99b9-14cd99674019]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 NetworkManager[44982]: <info>  [1760091155.4632] device (tapeb2cd434-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:12:35 np0005479822 NetworkManager[44982]: <info>  [1760091155.4643] device (tapeb2cd434-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.491 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3ca832-f657-4042-8a41-0a316cadd8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.496 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[b684cba0-49e1-4e5e-98ab-7eed8fbdbe09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 NetworkManager[44982]: <info>  [1760091155.4991] manager: (tapc1ba46b2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.538 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa65ed9-3842-40f3-9b96-1ea2481a4c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.540 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[ef06f6e4-caad-4a4b-a143-1810d84b4a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 NetworkManager[44982]: <info>  [1760091155.5672] device (tapc1ba46b2-70): carrier: link connected
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.572 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[7298c66f-988f-491e-890d-1e1b846383df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.586 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4668eaae-3ab4-453f-afc9-2f1e319043aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1ba46b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:28:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406927, 'reachable_time': 16782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240315, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.598 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb501f7-d268-4b2b-933c-f984b4592923]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:28ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 406927, 'tstamp': 406927}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240316, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.618 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7dfc84-7364-4362-ba97-910cd8f1cd43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1ba46b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:28:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406927, 'reachable_time': 16782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240317, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.652 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[16a97e01-de09-4cd0-91e9-fa22e3d3fe51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.724 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[982c9f8f-33dd-480c-b1f8-6f22295513ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.726 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ba46b2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.726 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.727 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ba46b2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:35 np0005479822 NetworkManager[44982]: <info>  [1760091155.7296] manager: (tapc1ba46b2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 10 06:12:35 np0005479822 kernel: tapc1ba46b2-70: entered promiscuous mode
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.733 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1ba46b2-70, col_values=(('external_ids', {'iface-id': 'ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:35 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:35Z|00042|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.736 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.736 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[085c5b08-0f4a-427d-8f07-07c64e30b819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.737 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.pid.haproxy
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID c1ba46b2-7e02-4d4f-b296-3e1e1f027d22
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:12:35 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.738 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'env', 'PROCESS_TAG=haproxy-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.900 2 DEBUG nova.compute.manager [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.901 2 DEBUG oslo_concurrency.lockutils [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.902 2 DEBUG oslo_concurrency.lockutils [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.902 2 DEBUG oslo_concurrency.lockutils [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:35 np0005479822 nova_compute[235132]: 2025-10-10 10:12:35.903 2 DEBUG nova.compute.manager [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Processing event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 06:12:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:35.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:36 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:36.014 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:12:36 np0005479822 podman[240392]: 2025-10-10 10:12:36.183988548 +0000 UTC m=+0.059825547 container create a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 06:12:36 np0005479822 systemd[1]: Started libpod-conmon-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026.scope.
Oct 10 06:12:36 np0005479822 podman[240392]: 2025-10-10 10:12:36.15664218 +0000 UTC m=+0.032479219 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:12:36 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:12:36 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f454a5f6c0eb08c56ed00e9648965604ea84ac6e2edf2652dc6afe6afb2c063/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:12:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:36 np0005479822 podman[240392]: 2025-10-10 10:12:36.299827825 +0000 UTC m=+0.175664874 container init a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 10 06:12:36 np0005479822 podman[240392]: 2025-10-10 10:12:36.309508329 +0000 UTC m=+0.185345358 container start a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 06:12:36 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : New worker (240413) forked
Oct 10 06:12:36 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : Loading success.
Oct 10 06:12:36 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:36.358 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.555 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091156.554304, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.555 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Started (Lifecycle Event)#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.559 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.565 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.570 2 INFO nova.virt.libvirt.driver [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance spawned successfully.#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.570 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.580 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.586 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.598 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.599 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.600 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.601 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.601 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.602 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.614 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.614 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091156.554571, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.615 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Paused (Lifecycle Event)#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.647 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.651 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091156.5638149, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.652 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Resumed (Lifecycle Event)#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.673 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.677 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.684 2 INFO nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 10.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.685 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.695 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.745 2 INFO nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 11.82 seconds to build instance.#033[00m
Oct 10 06:12:36 np0005479822 nova_compute[235132]: 2025-10-10 10:12:36.760 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:37.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.978 2 DEBUG nova.compute.manager [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.979 2 DEBUG oslo_concurrency.lockutils [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.980 2 DEBUG oslo_concurrency.lockutils [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.980 2 DEBUG oslo_concurrency.lockutils [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.980 2 DEBUG nova.compute.manager [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:12:37 np0005479822 nova_compute[235132]: 2025-10-10 10:12:37.981 2 WARNING nova.compute.manager [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 for instance with vm_state active and task_state None.#033[00m
Oct 10 06:12:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:12:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:12:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:38 np0005479822 podman[240448]: 2025-10-10 10:12:38.960416977 +0000 UTC m=+0.063977061 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 06:12:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:39 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:39.361 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:12:39 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:39Z|00043|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 06:12:39 np0005479822 nova_compute[235132]: 2025-10-10 10:12:39.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:39 np0005479822 NetworkManager[44982]: <info>  [1760091159.5327] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 10 06:12:39 np0005479822 NetworkManager[44982]: <info>  [1760091159.5338] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 10 06:12:39 np0005479822 nova_compute[235132]: 2025-10-10 10:12:39.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:39 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:39Z|00044|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 06:12:39 np0005479822 nova_compute[235132]: 2025-10-10 10:12:39.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:40 np0005479822 nova_compute[235132]: 2025-10-10 10:12:40.108 2 DEBUG nova.compute.manager [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:12:40 np0005479822 nova_compute[235132]: 2025-10-10 10:12:40.109 2 DEBUG nova.compute.manager [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:12:40 np0005479822 nova_compute[235132]: 2025-10-10 10:12:40.109 2 DEBUG oslo_concurrency.lockutils [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:12:40 np0005479822 nova_compute[235132]: 2025-10-10 10:12:40.110 2 DEBUG oslo_concurrency.lockutils [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:12:40 np0005479822 nova_compute[235132]: 2025-10-10 10:12:40.110 2 DEBUG nova.network.neutron [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:12:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:40.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:41.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:42 np0005479822 nova_compute[235132]: 2025-10-10 10:12:42.145 2 DEBUG nova.network.neutron [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:12:42 np0005479822 nova_compute[235132]: 2025-10-10 10:12:42.146 2 DEBUG nova.network.neutron [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:42 np0005479822 nova_compute[235132]: 2025-10-10 10:12:42.165 2 DEBUG oslo_concurrency.lockutils [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:12:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:42.206 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:12:42.208 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:42.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:42 np0005479822 nova_compute[235132]: 2025-10-10 10:12:42.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:43.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:12:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:44.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:44 np0005479822 nova_compute[235132]: 2025-10-10 10:12:44.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:46.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:12:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:12:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:47 np0005479822 nova_compute[235132]: 2025-10-10 10:12:47.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:47.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:48 np0005479822 nova_compute[235132]: 2025-10-10 10:12:48.063 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:49 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:49Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:9e:3d 10.100.0.6
Oct 10 06:12:49 np0005479822 ovn_controller[131749]: 2025-10-10T10:12:49Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:9e:3d 10.100.0.6
Oct 10 06:12:49 np0005479822 nova_compute[235132]: 2025-10-10 10:12:49.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:50 np0005479822 nova_compute[235132]: 2025-10-10 10:12:50.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:12:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:12:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:12:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:51 np0005479822 nova_compute[235132]: 2025-10-10 10:12:51.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:51 np0005479822 nova_compute[235132]: 2025-10-10 10:12:51.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:12:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:51.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:12:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.855 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.856 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.856 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.856 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:12:52 np0005479822 nova_compute[235132]: 2025-10-10 10:12:52.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:52 np0005479822 podman[240481]: 2025-10-10 10:12:52.970633193 +0000 UTC m=+0.067079385 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 10 06:12:52 np0005479822 podman[240480]: 2025-10-10 10:12:52.993539769 +0000 UTC m=+0.088909722 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:12:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.939 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.967 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.967 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.989 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.989 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.990 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.990 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:12:53 np0005479822 nova_compute[235132]: 2025-10-10 10:12:53.990 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:54.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:12:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031806664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.445 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.518 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.519 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.550 2 INFO nova.compute.manager [None req-52db11d7-5279-4395-931e-77d5220cbede 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Get console output#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.555 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 10 06:12:54 np0005479822 podman[240546]: 2025-10-10 10:12:54.625017486 +0000 UTC m=+0.124204217 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.754 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.755 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4734MB free_disk=59.94288635253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.755 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.756 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:12:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.829 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.830 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.830 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:12:54 np0005479822 nova_compute[235132]: 2025-10-10 10:12:54.882 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:12:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:12:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2515670112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:12:55 np0005479822 nova_compute[235132]: 2025-10-10 10:12:55.367 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:12:55 np0005479822 nova_compute[235132]: 2025-10-10 10:12:55.373 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:12:55 np0005479822 nova_compute[235132]: 2025-10-10 10:12:55.392 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:12:55 np0005479822 nova_compute[235132]: 2025-10-10 10:12:55.418 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:12:55 np0005479822 nova_compute[235132]: 2025-10-10 10:12:55.419 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:12:55 np0005479822 nova_compute[235132]: 2025-10-10 10:12:55.494 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:12:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101257 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:12:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:12:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:57 np0005479822 nova_compute[235132]: 2025-10-10 10:12:57.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:57.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:12:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:59 np0005479822 nova_compute[235132]: 2025-10-10 10:12:59.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:12:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:12:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:12:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:12:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:13:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:00.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:13:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:01.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:02 np0005479822 nova_compute[235132]: 2025-10-10 10:13:02.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:02 np0005479822 nova_compute[235132]: 2025-10-10 10:13:02.955 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:02 np0005479822 nova_compute[235132]: 2025-10-10 10:13:02.956 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:02 np0005479822 nova_compute[235132]: 2025-10-10 10:13:02.957 2 DEBUG nova.objects.instance [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:03 np0005479822 nova_compute[235132]: 2025-10-10 10:13:03.751 2 DEBUG nova.objects.instance [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:03 np0005479822 nova_compute[235132]: 2025-10-10 10:13:03.772 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 06:13:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:03.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:03 np0005479822 nova_compute[235132]: 2025-10-10 10:13:03.988 2 DEBUG nova.policy [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 06:13:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:04.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:04 np0005479822 nova_compute[235132]: 2025-10-10 10:13:04.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:04 np0005479822 nova_compute[235132]: 2025-10-10 10:13:04.876 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully created port: 9ea527cd-71d7-4979-bef2-4cbe7f0038cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 06:13:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.736 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully updated port: 9ea527cd-71d7-4979-bef2-4cbe7f0038cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.759 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.760 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.760 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:13:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.864 2 DEBUG nova.compute.manager [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.865 2 DEBUG nova.compute.manager [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-9ea527cd-71d7-4979-bef2-4cbe7f0038cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:13:05 np0005479822 nova_compute[235132]: 2025-10-10 10:13:05.865 2 DEBUG oslo_concurrency.lockutils [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:06.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.766 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.788 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.789 2 DEBUG oslo_concurrency.lockutils [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.790 2 DEBUG nova.network.neutron [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.795 2 DEBUG nova.virt.libvirt.vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.796 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.798 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.799 2 DEBUG os_vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ea527cd-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ea527cd-71, col_values=(('external_ids', {'iface-id': '9ea527cd-71d7-4979-bef2-4cbe7f0038cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:d2:11', 'vm-uuid': '2fe2b257-7e1f-46c2-aed9-0593c533e290'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 NetworkManager[44982]: <info>  [1760091187.8139] manager: (tap9ea527cd-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 10 06:13:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.825 2 INFO os_vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71')#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.826 2 DEBUG nova.virt.libvirt.vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.826 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.827 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.830 2 DEBUG nova.virt.libvirt.guest [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] attach device xml: <interface type="ethernet">
Oct 10 06:13:07 np0005479822 nova_compute[235132]:  <mac address="fa:16:3e:33:d2:11"/>
Oct 10 06:13:07 np0005479822 nova_compute[235132]:  <model type="virtio"/>
Oct 10 06:13:07 np0005479822 nova_compute[235132]:  <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:13:07 np0005479822 nova_compute[235132]:  <mtu size="1442"/>
Oct 10 06:13:07 np0005479822 nova_compute[235132]:  <target dev="tap9ea527cd-71"/>
Oct 10 06:13:07 np0005479822 nova_compute[235132]: </interface>
Oct 10 06:13:07 np0005479822 nova_compute[235132]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 kernel: tap9ea527cd-71: entered promiscuous mode
Oct 10 06:13:07 np0005479822 NetworkManager[44982]: <info>  [1760091187.8467] manager: (tap9ea527cd-71): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 10 06:13:07 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:07Z|00045|binding|INFO|Claiming lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf for this chassis.
Oct 10 06:13:07 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:07Z|00046|binding|INFO|9ea527cd-71d7-4979-bef2-4cbe7f0038cf: Claiming fa:16:3e:33:d2:11 10.100.0.19
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.863 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:d2:11 10.100.0.19'], port_security=['fa:16:3e:33:d2:11 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8d7aa34-fd4e-44cc-8eaa-a67a270b663f, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=9ea527cd-71d7-4979-bef2-4cbe7f0038cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.864 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf in datapath 2d451f14-1551-484b-9a8f-b854ec5a8acc bound to our chassis#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.865 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d451f14-1551-484b-9a8f-b854ec5a8acc#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.881 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[500dd0b6-b97d-41cb-9946-f422a37c11b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.882 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d451f14-11 in ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.884 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d451f14-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.884 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c24bdabe-03e3-41db-b609-f6452bcc41e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.885 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0c0bc9-40fa-45bb-bde0-fe6c421088f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 systemd-udevd[240639]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:07Z|00047|binding|INFO|Setting lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf ovn-installed in OVS
Oct 10 06:13:07 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:07Z|00048|binding|INFO|Setting lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf up in Southbound
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.907 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[20842d0c-17f1-4e5e-a7d2-60e8fa07b64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 NetworkManager[44982]: <info>  [1760091187.9184] device (tap9ea527cd-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:13:07 np0005479822 NetworkManager[44982]: <info>  [1760091187.9193] device (tap9ea527cd-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.929 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[75e2fa74-47e6-473c-aaa6-6399643886ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:07.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.958 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.958 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[fb52d022-2eb1-4e6c-9ce7-0236c7bb36b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.960 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.960 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:8b:9e:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:13:07 np0005479822 nova_compute[235132]: 2025-10-10 10:13:07.961 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:33:d2:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:13:07 np0005479822 NetworkManager[44982]: <info>  [1760091187.9647] manager: (tap2d451f14-10): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 10 06:13:07 np0005479822 systemd-udevd[240643]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.964 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[55890e9c-840f-45fe-a68c-ad6a8aaba63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.991 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[1342a07c-0793-41c1-b650-5939b64fd6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:07 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.994 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[709c7541-6099-42e0-bf44-9aea060d3b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.007 2 DEBUG nova.virt.libvirt.guest [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:08</nova:creationTime>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:08 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    <nova:port uuid="9ea527cd-71d7-4979-bef2-4cbe7f0038cf">
Oct 10 06:13:08 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:08 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:08 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:08 np0005479822 nova_compute[235132]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 10 06:13:08 np0005479822 NetworkManager[44982]: <info>  [1760091188.0106] device (tap2d451f14-10): carrier: link connected
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.014 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4be0b9-2f30-4afa-bbef-f90839a4901a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.036 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[48b563c1-1d05-4842-b033-1d8e4c65c76f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d451f14-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:ce:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410172, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240665, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.046 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.052 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[84bb6b73-68a6-4356-924c-bcac4e3026c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:ce51'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410172, 'tstamp': 410172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240666, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.071 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5163dee8-72bf-4316-8eb7-af3865d8264a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d451f14-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:ce:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410172, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240667, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.097 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[30bf8c62-b034-41ec-b2df-a691e4cdbdf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.153 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[05cfe489-a543-4bab-a2f7-a471f0cee972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.155 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d451f14-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.155 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.155 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d451f14-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:08 np0005479822 NetworkManager[44982]: <info>  [1760091188.1578] manager: (tap2d451f14-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 10 06:13:08 np0005479822 kernel: tap2d451f14-10: entered promiscuous mode
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.160 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d451f14-10, col_values=(('external_ids', {'iface-id': '3bbca16e-9180-468e-a8f6-96640db7dad5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:08 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:08Z|00049|binding|INFO|Releasing lport 3bbca16e-9180-468e-a8f6-96640db7dad5 from this chassis (sb_readonly=0)
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.176 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d451f14-1551-484b-9a8f-b854ec5a8acc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d451f14-1551-484b-9a8f-b854ec5a8acc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.177 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[f2031644-88ea-49e7-aceb-2d895c95992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.178 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-2d451f14-1551-484b-9a8f-b854ec5a8acc
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/2d451f14-1551-484b-9a8f-b854ec5a8acc.pid.haproxy
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID 2d451f14-1551-484b-9a8f-b854ec5a8acc
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:13:08 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.179 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'env', 'PROCESS_TAG=haproxy-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d451f14-1551-484b-9a8f-b854ec5a8acc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.202 2 DEBUG nova.compute.manager [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.202 2 DEBUG oslo_concurrency.lockutils [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.203 2 DEBUG oslo_concurrency.lockutils [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.203 2 DEBUG oslo_concurrency.lockutils [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.204 2 DEBUG nova.compute.manager [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:13:08 np0005479822 nova_compute[235132]: 2025-10-10 10:13:08.204 2 WARNING nova.compute.manager [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.#033[00m
Oct 10 06:13:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:08 np0005479822 podman[240699]: 2025-10-10 10:13:08.679814992 +0000 UTC m=+0.092043527 container create a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 06:13:08 np0005479822 systemd[1]: Started libpod-conmon-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc.scope.
Oct 10 06:13:08 np0005479822 podman[240699]: 2025-10-10 10:13:08.636489198 +0000 UTC m=+0.048717773 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:13:08 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:13:08 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e424e3f4e3815f244fa79fcc7f0f5daf62663db4a716ad6c422fb36d7b3a0dc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:13:08 np0005479822 podman[240699]: 2025-10-10 10:13:08.769275108 +0000 UTC m=+0.181503693 container init a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:13:08 np0005479822 podman[240699]: 2025-10-10 10:13:08.779912409 +0000 UTC m=+0.192140944 container start a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:13:08 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : New worker (240720) forked
Oct 10 06:13:08 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : Loading success.
Oct 10 06:13:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:09 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:d2:11 10.100.0.19
Oct 10 06:13:09 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:d2:11 10.100.0.19
Oct 10 06:13:09 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:09 np0005479822 podman[240730]: 2025-10-10 10:13:09.966019348 +0000 UTC m=+0.064335860 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.117 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-9ea527cd-71d7-4979-bef2-4cbe7f0038cf" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.117 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-9ea527cd-71d7-4979-bef2-4cbe7f0038cf" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.138 2 DEBUG nova.objects.instance [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.172 2 DEBUG nova.virt.libvirt.vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.173 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.174 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.179 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.182 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.186 2 DEBUG nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Attempting to detach device tap9ea527cd-71 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.187 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <mac address="fa:16:3e:33:d2:11"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <model type="virtio"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <mtu size="1442"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <target dev="tap9ea527cd-71"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </interface>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.196 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.201 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <name>instance-00000003</name>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:08</nova:creationTime>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:port uuid="9ea527cd-71d7-4979-bef2-4cbe7f0038cf">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <memory unit='KiB'>131072</memory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <vcpu placement='static'>1</vcpu>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <resource>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <partition>/machine</partition>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </resource>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <sysinfo type='smbios'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='manufacturer'>RDO</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='product'>OpenStack Compute</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='family'>Virtual Machine</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <boot dev='hd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <smbios mode='sysinfo'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <vmcoreinfo state='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <cpu mode='custom' match='exact' check='full'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <vendor>AMD</vendor>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='x2apic'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc-deadline'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='hypervisor'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc_adjust'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='spec-ctrl'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='stibp'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='arch-capabilities'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='ssbd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='cmp_legacy'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='overflow-recov'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='succor'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='ibrs'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='amd-ssbd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='virt-ssbd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='lbrv'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='tsc-scale'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='vmcb-clean'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='flushbyasid'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pause-filter'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pfthreshold'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='rdctl-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='mds-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='gds-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='rfds-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='xsaves'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svm'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='topoext'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='npt'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='nrip-save'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <clock offset='utc'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <timer name='pit' tickpolicy='delay'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <timer name='hpet' present='no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <on_poweroff>destroy</on_poweroff>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <on_reboot>restart</on_reboot>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <on_crash>destroy</on_crash>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <disk type='network' device='disk'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='vda' bus='virtio'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='virtio-disk0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <disk type='network' device='cdrom'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='sda' bus='sata'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <readonly/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='sata0-0-0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='0' model='pcie-root'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pcie.0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='1' port='0x10'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='2' port='0x11'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='3' port='0x12'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='4' port='0x13'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='5' port='0x14'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='6' port='0x15'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='7' port='0x16'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='8' port='0x17'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.8'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='9' port='0x18'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.9'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='10' port='0x19'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.10'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='11' port='0x1a'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.11'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='12' port='0x1b'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.12'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='13' port='0x1c'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.13'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='14' port='0x1d'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.14'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='15' port='0x1e'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.15'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='16' port='0x1f'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.16'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='17' port='0x20'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.17'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='18' port='0x21'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.18'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='19' port='0x22'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.19'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='20' port='0x23'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.20'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='21' port='0x24'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.21'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='22' port='0x25'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.22'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='23' port='0x26'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.23'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='24' port='0x27'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.24'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='25' port='0x28'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.25'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-pci-bridge'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.26'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='usb'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='sata' index='0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='ide'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='tapeb2cd434-44'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='net0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:33:d2:11'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='tap9ea527cd-71'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='net1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <serial type='pty'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target type='isa-serial' port='0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <model name='isa-serial'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </target>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <console type='pty' tty='/dev/pts/0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target type='serial' port='0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </console>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <input type='tablet' bus='usb'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='input0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='usb' bus='0' port='1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <input type='mouse' bus='ps2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='input1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <input type='keyboard' bus='ps2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='input2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <listen type='address' address='::0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <audio id='1' type='none'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model type='virtio' heads='1' primary='yes'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='video0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <watchdog model='itco' action='reset'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='watchdog0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </watchdog>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <memballoon model='virtio'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <stats period='10'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='balloon0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <rng model='virtio'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <backend model='random'>/dev/urandom</backend>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='rng0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <label>+107:+107</label>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <imagelabel>+107:+107</imagelabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.202 2 INFO nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tap9ea527cd-71 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the persistent domain config.#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.203 2 DEBUG nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] (1/8): Attempting to detach device tap9ea527cd-71 with device alias net1 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.203 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <mac address="fa:16:3e:33:d2:11"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <model type="virtio"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <mtu size="1442"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <target dev="tap9ea527cd-71"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </interface>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 06:13:10 np0005479822 kernel: tap9ea527cd-71 (unregistering): left promiscuous mode
Oct 10 06:13:10 np0005479822 NetworkManager[44982]: <info>  [1760091190.3161] device (tap9ea527cd-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG nova.compute.manager [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG oslo_concurrency.lockutils [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG oslo_concurrency.lockutils [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG oslo_concurrency.lockutils [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.323 2 DEBUG nova.compute.manager [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.323 2 WARNING nova.compute.manager [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.#033[00m
Oct 10 06:13:10 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:10Z|00050|binding|INFO|Releasing lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf from this chassis (sb_readonly=0)
Oct 10 06:13:10 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:10Z|00051|binding|INFO|Setting lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf down in Southbound
Oct 10 06:13:10 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:10Z|00052|binding|INFO|Removing iface tap9ea527cd-71 ovn-installed in OVS
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.333 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:d2:11 10.100.0.19'], port_security=['fa:16:3e:33:d2:11 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8d7aa34-fd4e-44cc-8eaa-a67a270b663f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=9ea527cd-71d7-4979-bef2-4cbe7f0038cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.335 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf in datapath 2d451f14-1551-484b-9a8f-b854ec5a8acc unbound from our chassis#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.336 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d451f14-1551-484b-9a8f-b854ec5a8acc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.337 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[775944dd-957f-4f40-b936-569a223cedf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.337 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc namespace which is not needed anymore#033[00m
Oct 10 06:13:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:13:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.343 2 DEBUG nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Received event <DeviceRemovedEvent: 1760091190.3428771, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.345 2 DEBUG nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Start waiting for the detach event from libvirt for device tap9ea527cd-71 with device alias net1 for instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.346 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.353 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <name>instance-00000003</name>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:08</nova:creationTime>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:port uuid="9ea527cd-71d7-4979-bef2-4cbe7f0038cf">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <memory unit='KiB'>131072</memory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <vcpu placement='static'>1</vcpu>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <resource>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <partition>/machine</partition>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </resource>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <sysinfo type='smbios'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='manufacturer'>RDO</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='product'>OpenStack Compute</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <entry name='family'>Virtual Machine</entry>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <boot dev='hd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <smbios mode='sysinfo'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <vmcoreinfo state='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <cpu mode='custom' match='exact' check='full'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <vendor>AMD</vendor>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='x2apic'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc-deadline'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='hypervisor'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc_adjust'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='spec-ctrl'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='stibp'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='arch-capabilities'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='ssbd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='cmp_legacy'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='overflow-recov'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='succor'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='ibrs'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='amd-ssbd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='virt-ssbd'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='lbrv'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='tsc-scale'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='vmcb-clean'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='flushbyasid'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pause-filter'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pfthreshold'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='rdctl-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='mds-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='gds-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='rfds-no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='xsaves'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svm'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='require' name='topoext'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='npt'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <feature policy='disable' name='nrip-save'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <clock offset='utc'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <timer name='pit' tickpolicy='delay'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <timer name='hpet' present='no'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <on_poweroff>destroy</on_poweroff>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <on_reboot>restart</on_reboot>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <on_crash>destroy</on_crash>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <disk type='network' device='disk'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='vda' bus='virtio'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='virtio-disk0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <disk type='network' device='cdrom'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='sda' bus='sata'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <readonly/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='sata0-0-0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='0' model='pcie-root'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pcie.0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='1' port='0x10'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='2' port='0x11'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='3' port='0x12'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='4' port='0x13'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='5' port='0x14'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='6' port='0x15'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='7' port='0x16'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='8' port='0x17'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.8'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='9' port='0x18'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.9'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='10' port='0x19'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.10'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='11' port='0x1a'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.11'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='12' port='0x1b'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.12'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='13' port='0x1c'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.13'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='14' port='0x1d'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.14'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='15' port='0x1e'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.15'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='16' port='0x1f'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.16'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='17' port='0x20'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.17'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='18' port='0x21'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.18'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='19' port='0x22'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.19'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='20' port='0x23'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.20'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='21' port='0x24'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.21'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='22' port='0x25'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.22'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='23' port='0x26'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.23'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='24' port='0x27'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.24'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target chassis='25' port='0x28'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.25'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model name='pcie-pci-bridge'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='pci.26'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='usb'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <controller type='sata' index='0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='ide'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target dev='tapeb2cd434-44'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='net0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <serial type='pty'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target type='isa-serial' port='0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:        <model name='isa-serial'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      </target>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <console type='pty' tty='/dev/pts/0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <target type='serial' port='0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </console>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <input type='tablet' bus='usb'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='input0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='usb' bus='0' port='1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <input type='mouse' bus='ps2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='input1'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <input type='keyboard' bus='ps2'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='input2'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <listen type='address' address='::0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <audio id='1' type='none'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <model type='virtio' heads='1' primary='yes'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='video0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <watchdog model='itco' action='reset'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='watchdog0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </watchdog>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <memballoon model='virtio'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <stats period='10'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='balloon0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <rng model='virtio'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <backend model='random'>/dev/urandom</backend>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <alias name='rng0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <label>+107:+107</label>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <imagelabel>+107:+107</imagelabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.353 2 INFO nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tap9ea527cd-71 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the live domain config.#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.354 2 DEBUG nova.virt.libvirt.vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.354 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.355 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.355 2 DEBUG os_vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ea527cd-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.364 2 INFO os_vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71')#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.365 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:10</nova:creationTime>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:10 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:10 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:10 np0005479822 nova_compute[235132]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 10 06:13:10 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : haproxy version is 2.8.14-c23fe91
Oct 10 06:13:10 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : path to executable is /usr/sbin/haproxy
Oct 10 06:13:10 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [WARNING]  (240718) : Exiting Master process...
Oct 10 06:13:10 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [ALERT]    (240718) : Current worker (240720) exited with code 143 (Terminated)
Oct 10 06:13:10 np0005479822 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [WARNING]  (240718) : All workers exited. Exiting... (0)
Oct 10 06:13:10 np0005479822 systemd[1]: libpod-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc.scope: Deactivated successfully.
Oct 10 06:13:10 np0005479822 podman[240769]: 2025-10-10 10:13:10.495142863 +0000 UTC m=+0.053065041 container died a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:13:10 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc-userdata-shm.mount: Deactivated successfully.
Oct 10 06:13:10 np0005479822 systemd[1]: var-lib-containers-storage-overlay-e424e3f4e3815f244fa79fcc7f0f5daf62663db4a716ad6c422fb36d7b3a0dc9-merged.mount: Deactivated successfully.
Oct 10 06:13:10 np0005479822 podman[240769]: 2025-10-10 10:13:10.547545976 +0000 UTC m=+0.105468174 container cleanup a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 06:13:10 np0005479822 systemd[1]: libpod-conmon-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc.scope: Deactivated successfully.
Oct 10 06:13:10 np0005479822 podman[240800]: 2025-10-10 10:13:10.613711056 +0000 UTC m=+0.043739148 container remove a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.622 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[70d36e82-5601-4705-a542-4dbe5758d928]: (4, ('Fri Oct 10 10:13:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc (a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc)\na51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc\nFri Oct 10 10:13:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc (a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc)\na51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.626 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1a677852-acb5-4723-a518-1decca262504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.627 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d451f14-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:10 np0005479822 kernel: tap2d451f14-10: left promiscuous mode
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.661 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7d83854b-d2d8-45b1-bc6b-aaff098cdf10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.694 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[00213736-2cf2-4eff-a9af-2fb29cdf476a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.696 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed6c32b-c173-4c2c-882c-a0c63c85dcfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.715 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed78d5e-9caa-4eb4-b35d-0ffb6c005620]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410166, 'reachable_time': 42124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240815, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 systemd[1]: run-netns-ovnmeta\x2d2d451f14\x2d1551\x2d484b\x2d9a8f\x2db854ec5a8acc.mount: Deactivated successfully.
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.721 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 06:13:10 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.721 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[51f09669-537b-4a22-a3fb-269f2aa8409d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.813 2 DEBUG nova.network.neutron [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.814 2 DEBUG nova.network.neutron [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:10 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:10 np0005479822 nova_compute[235132]: 2025-10-10 10:13:10.843 2 DEBUG oslo_concurrency.lockutils [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101311 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:13:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:11 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.695 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-unplugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.696 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.696 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.697 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.697 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-unplugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.697 2 WARNING nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-unplugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.697 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.698 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.698 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.698 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.699 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.699 2 WARNING nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.761 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.762 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.762 2 DEBUG nova.network.neutron [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.830 2 DEBUG nova.compute.manager [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-deleted-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.830 2 INFO nova.compute.manager [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Neutron deleted interface 9ea527cd-71d7-4979-bef2-4cbe7f0038cf; detaching it from the instance and deleting it from the info cache#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.831 2 DEBUG nova.network.neutron [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:12 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.857 2 DEBUG nova.objects.instance [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.910 2 DEBUG nova.objects.instance [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lazy-loading 'flavor' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.948 2 DEBUG nova.virt.libvirt.vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.949 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.950 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.954 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.957 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <name>instance-00000003</name>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:10</nova:creationTime>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <memory unit='KiB'>131072</memory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <vcpu placement='static'>1</vcpu>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <resource>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <partition>/machine</partition>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </resource>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <sysinfo type='smbios'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='manufacturer'>RDO</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='product'>OpenStack Compute</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='family'>Virtual Machine</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <boot dev='hd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <smbios mode='sysinfo'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <vmcoreinfo state='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <cpu mode='custom' match='exact' check='full'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <vendor>AMD</vendor>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='x2apic'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc-deadline'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='hypervisor'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc_adjust'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='spec-ctrl'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='stibp'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='arch-capabilities'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='ssbd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='cmp_legacy'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='overflow-recov'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='succor'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='ibrs'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='amd-ssbd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='virt-ssbd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='lbrv'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='tsc-scale'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='vmcb-clean'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='flushbyasid'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pause-filter'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pfthreshold'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='rdctl-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='mds-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='gds-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='rfds-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='xsaves'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svm'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='topoext'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='npt'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='nrip-save'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <clock offset='utc'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <timer name='pit' tickpolicy='delay'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <timer name='hpet' present='no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <on_poweroff>destroy</on_poweroff>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <on_reboot>restart</on_reboot>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <on_crash>destroy</on_crash>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <disk type='network' device='disk'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target dev='vda' bus='virtio'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='virtio-disk0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <disk type='network' device='cdrom'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target dev='sda' bus='sata'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <readonly/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='sata0-0-0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='0' model='pcie-root'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pcie.0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='1' port='0x10'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='2' port='0x11'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='3' port='0x12'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='4' port='0x13'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='5' port='0x14'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='6' port='0x15'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='7' port='0x16'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='8' port='0x17'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.8'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='9' port='0x18'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.9'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='10' port='0x19'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.10'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='11' port='0x1a'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.11'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='12' port='0x1b'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.12'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='13' port='0x1c'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.13'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='14' port='0x1d'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.14'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='15' port='0x1e'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.15'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='16' port='0x1f'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.16'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='17' port='0x20'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.17'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='18' port='0x21'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.18'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='19' port='0x22'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.19'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='20' port='0x23'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.20'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='21' port='0x24'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.21'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='22' port='0x25'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.22'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='23' port='0x26'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.23'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='24' port='0x27'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.24'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='25' port='0x28'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.25'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-pci-bridge'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.26'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='usb'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='sata' index='0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='ide'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target dev='tapeb2cd434-44'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='net0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <serial type='pty'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target type='isa-serial' port='0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <model name='isa-serial'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </target>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <console type='pty' tty='/dev/pts/0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target type='serial' port='0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </console>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <input type='tablet' bus='usb'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='input0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='usb' bus='0' port='1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <input type='mouse' bus='ps2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='input1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <input type='keyboard' bus='ps2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='input2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <listen type='address' address='::0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <audio id='1' type='none'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model type='virtio' heads='1' primary='yes'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='video0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <watchdog model='itco' action='reset'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='watchdog0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </watchdog>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <memballoon model='virtio'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <stats period='10'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='balloon0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <rng model='virtio'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <backend model='random'>/dev/urandom</backend>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='rng0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <label>+107:+107</label>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <imagelabel>+107:+107</imagelabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.958 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.965 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <name>instance-00000003</name>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:10</nova:creationTime>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <memory unit='KiB'>131072</memory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <vcpu placement='static'>1</vcpu>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <resource>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <partition>/machine</partition>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </resource>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <sysinfo type='smbios'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='manufacturer'>RDO</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='product'>OpenStack Compute</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <entry name='family'>Virtual Machine</entry>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <boot dev='hd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <smbios mode='sysinfo'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <vmcoreinfo state='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <cpu mode='custom' match='exact' check='full'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <vendor>AMD</vendor>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='x2apic'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc-deadline'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='hypervisor'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc_adjust'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='spec-ctrl'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='stibp'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='arch-capabilities'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='ssbd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='cmp_legacy'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='overflow-recov'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='succor'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='ibrs'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='amd-ssbd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='virt-ssbd'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='lbrv'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='tsc-scale'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='vmcb-clean'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='flushbyasid'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pause-filter'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pfthreshold'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='rdctl-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='mds-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='gds-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='rfds-no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='xsaves'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svm'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='require' name='topoext'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='npt'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <feature policy='disable' name='nrip-save'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <clock offset='utc'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <timer name='pit' tickpolicy='delay'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <timer name='hpet' present='no'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <on_poweroff>destroy</on_poweroff>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <on_reboot>restart</on_reboot>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <on_crash>destroy</on_crash>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <disk type='network' device='disk'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target dev='vda' bus='virtio'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='virtio-disk0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <disk type='network' device='cdrom'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target dev='sda' bus='sata'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <readonly/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='sata0-0-0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='0' model='pcie-root'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pcie.0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='1' port='0x10'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='2' port='0x11'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='3' port='0x12'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='4' port='0x13'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='5' port='0x14'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='6' port='0x15'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='7' port='0x16'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='8' port='0x17'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.8'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='9' port='0x18'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.9'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='10' port='0x19'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.10'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='11' port='0x1a'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.11'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='12' port='0x1b'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.12'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='13' port='0x1c'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.13'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='14' port='0x1d'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.14'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='15' port='0x1e'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.15'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='16' port='0x1f'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.16'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='17' port='0x20'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.17'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='18' port='0x21'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.18'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='19' port='0x22'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.19'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='20' port='0x23'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.20'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='21' port='0x24'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.21'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='22' port='0x25'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.22'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='23' port='0x26'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.23'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='24' port='0x27'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.24'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target chassis='25' port='0x28'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.25'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model name='pcie-pci-bridge'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='pci.26'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='usb'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <controller type='sata' index='0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='ide'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target dev='tapeb2cd434-44'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='net0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <serial type='pty'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target type='isa-serial' port='0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:        <model name='isa-serial'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      </target>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <console type='pty' tty='/dev/pts/0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <target type='serial' port='0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </console>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <input type='tablet' bus='usb'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='input0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='usb' bus='0' port='1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <input type='mouse' bus='ps2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='input1'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <input type='keyboard' bus='ps2'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='input2'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <listen type='address' address='::0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <audio id='1' type='none'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <model type='virtio' heads='1' primary='yes'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='video0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <watchdog model='itco' action='reset'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='watchdog0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </watchdog>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <memballoon model='virtio'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <stats period='10'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='balloon0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <rng model='virtio'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <backend model='random'>/dev/urandom</backend>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <alias name='rng0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <label>+107:+107</label>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <imagelabel>+107:+107</imagelabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.966 2 WARNING nova.virt.libvirt.driver [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Detaching interface fa:16:3e:33:d2:11 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9ea527cd-71' not found.#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.966 2 DEBUG nova.virt.libvirt.vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.967 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.968 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.968 2 DEBUG os_vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ea527cd-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.973 2 INFO os_vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71')#033[00m
Oct 10 06:13:12 np0005479822 nova_compute[235132]: 2025-10-10 10:13:12.974 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:13:12</nova:creationTime>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 06:13:12 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:13:12 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:13:12 np0005479822 nova_compute[235132]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 10 06:13:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:14 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:14Z|00053|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 06:13:14 np0005479822 nova_compute[235132]: 2025-10-10 10:13:14.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:14 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:14 np0005479822 nova_compute[235132]: 2025-10-10 10:13:14.905 2 INFO nova.network.neutron [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 10 06:13:14 np0005479822 nova_compute[235132]: 2025-10-10 10:13:14.906 2 DEBUG nova.network.neutron [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:14 np0005479822 nova_compute[235132]: 2025-10-10 10:13:14.936 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:14 np0005479822 nova_compute[235132]: 2025-10-10 10:13:14.963 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-9ea527cd-71d7-4979-bef2-4cbe7f0038cf" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.373 2 DEBUG nova.compute.manager [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.373 2 DEBUG nova.compute.manager [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.374 2 DEBUG oslo_concurrency.lockutils [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.374 2 DEBUG oslo_concurrency.lockutils [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.375 2 DEBUG nova.network.neutron [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.454 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.455 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.455 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.455 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.456 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.458 2 INFO nova.compute.manager [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Terminating instance#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.460 2 DEBUG nova.compute.manager [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 06:13:15 np0005479822 kernel: tapeb2cd434-44 (unregistering): left promiscuous mode
Oct 10 06:13:15 np0005479822 NetworkManager[44982]: <info>  [1760091195.5278] device (tapeb2cd434-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:15Z|00054|binding|INFO|Releasing lport eb2cd434-444d-4138-bbe8-948bf47d3986 from this chassis (sb_readonly=0)
Oct 10 06:13:15 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:15Z|00055|binding|INFO|Setting lport eb2cd434-444d-4138-bbe8-948bf47d3986 down in Southbound
Oct 10 06:13:15 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:15Z|00056|binding|INFO|Removing iface tapeb2cd434-44 ovn-installed in OVS
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.555 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9e:3d 10.100.0.6'], port_security=['fa:16:3e:8b:9e:3d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b2e1b849-99bd-43fd-883d-af1bb6750e12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86b59927-b11d-4637-a561-9adc673cffb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=eb2cd434-444d-4138-bbe8-948bf47d3986) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.557 141156 INFO neutron.agent.ovn.metadata.agent [-] Port eb2cd434-444d-4138-bbe8-948bf47d3986 in datapath c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 unbound from our chassis#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.559 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.560 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[23638119-d9f1-4d5f-a558-5f3e0318d326]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.561 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 namespace which is not needed anymore#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 10 06:13:15 np0005479822 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 15.487s CPU time.
Oct 10 06:13:15 np0005479822 systemd-machined[191637]: Machine qemu-2-instance-00000003 terminated.
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.700 2 INFO nova.virt.libvirt.driver [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance destroyed successfully.#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.701 2 DEBUG nova.objects.instance [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:15 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : haproxy version is 2.8.14-c23fe91
Oct 10 06:13:15 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : path to executable is /usr/sbin/haproxy
Oct 10 06:13:15 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [WARNING]  (240411) : Exiting Master process...
Oct 10 06:13:15 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [WARNING]  (240411) : Exiting Master process...
Oct 10 06:13:15 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [ALERT]    (240411) : Current worker (240413) exited with code 143 (Terminated)
Oct 10 06:13:15 np0005479822 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [WARNING]  (240411) : All workers exited. Exiting... (0)
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.731 2 DEBUG nova.virt.libvirt.vif [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.732 2 DEBUG nova.network.os_vif_util [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.733 2 DEBUG nova.network.os_vif_util [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.733 2 DEBUG os_vif [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 systemd[1]: libpod-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026.scope: Deactivated successfully.
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb2cd434-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:15 np0005479822 podman[240848]: 2025-10-10 10:13:15.739672314 +0000 UTC m=+0.061152424 container died a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.748 2 INFO os_vif [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44')#033[00m
Oct 10 06:13:15 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026-userdata-shm.mount: Deactivated successfully.
Oct 10 06:13:15 np0005479822 systemd[1]: var-lib-containers-storage-overlay-4f454a5f6c0eb08c56ed00e9648965604ea84ac6e2edf2652dc6afe6afb2c063-merged.mount: Deactivated successfully.
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.803 2 DEBUG nova.compute.manager [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-unplugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.803 2 DEBUG oslo_concurrency.lockutils [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.803 2 DEBUG oslo_concurrency.lockutils [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.804 2 DEBUG oslo_concurrency.lockutils [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.804 2 DEBUG nova.compute.manager [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-unplugged-eb2cd434-444d-4138-bbe8-948bf47d3986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.804 2 DEBUG nova.compute.manager [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-unplugged-eb2cd434-444d-4138-bbe8-948bf47d3986 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 06:13:15 np0005479822 podman[240848]: 2025-10-10 10:13:15.804803524 +0000 UTC m=+0.126283624 container cleanup a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 06:13:15 np0005479822 systemd[1]: libpod-conmon-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026.scope: Deactivated successfully.
Oct 10 06:13:15 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:15 np0005479822 podman[240907]: 2025-10-10 10:13:15.873749188 +0000 UTC m=+0.045971487 container remove a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.883 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[f84347cc-bb9a-442f-b00e-96dbee02b206]: (4, ('Fri Oct 10 10:13:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 (a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026)\na01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026\nFri Oct 10 10:13:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 (a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026)\na01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.885 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9c638ee6-3b17-4c86-8567-715205bbd89c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.887 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ba46b2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:15 np0005479822 kernel: tapc1ba46b2-70: left promiscuous mode
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 nova_compute[235132]: 2025-10-10 10:13:15.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.924 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[63f00ea2-416c-4ccf-9fd0-34e9a26e5a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.956 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7b3f55-87e9-435b-9183-d7472fa2262c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.957 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7af6bacc-b7c5-4fce-9573-ca2fae4abfd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.984 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e44fa1-1b16-4511-91f9-f1bffa208ae8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406919, 'reachable_time': 21764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240921, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:15 np0005479822 systemd[1]: run-netns-ovnmeta\x2dc1ba46b2\x2d7e02\x2d4d4f\x2db296\x2d3e1e1f027d22.mount: Deactivated successfully.
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.991 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 06:13:15 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.992 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[f41825d3-8943-4f90-9964-21cd4617d813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:16 np0005479822 nova_compute[235132]: 2025-10-10 10:13:16.230 2 INFO nova.virt.libvirt.driver [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deleting instance files /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290_del#033[00m
Oct 10 06:13:16 np0005479822 nova_compute[235132]: 2025-10-10 10:13:16.231 2 INFO nova.virt.libvirt.driver [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deletion of /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290_del complete#033[00m
Oct 10 06:13:16 np0005479822 nova_compute[235132]: 2025-10-10 10:13:16.293 2 INFO nova.compute.manager [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 06:13:16 np0005479822 nova_compute[235132]: 2025-10-10 10:13:16.294 2 DEBUG oslo.service.loopingcall [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 06:13:16 np0005479822 nova_compute[235132]: 2025-10-10 10:13:16.294 2 DEBUG nova.compute.manager [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 06:13:16 np0005479822 nova_compute[235132]: 2025-10-10 10:13:16.295 2 DEBUG nova.network.neutron [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 06:13:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:16 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.008 2 DEBUG nova.network.neutron [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.009 2 DEBUG nova.network.neutron [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.036 2 DEBUG oslo_concurrency.lockutils [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.211 2 DEBUG nova.network.neutron [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.229 2 INFO nova.compute.manager [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 0.93 seconds to deallocate network for instance.
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.282 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.282 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.401 2 DEBUG oslo_concurrency.processutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:13:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:17 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:13:17 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2423340283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.925 2 DEBUG nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.926 2 DEBUG oslo_concurrency.lockutils [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.926 2 DEBUG oslo_concurrency.lockutils [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.927 2 DEBUG oslo_concurrency.lockutils [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.927 2 DEBUG nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.927 2 WARNING nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 for instance with vm_state deleted and task_state None.
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.928 2 DEBUG nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-deleted-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.929 2 DEBUG oslo_concurrency.processutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.936 2 DEBUG nova.compute.provider_tree [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 06:13:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:17.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.960 2 DEBUG nova.scheduler.client.report [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 06:13:17 np0005479822 nova_compute[235132]: 2025-10-10 10:13:17.993 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:13:18 np0005479822 nova_compute[235132]: 2025-10-10 10:13:18.028 2 INFO nova.scheduler.client.report [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 2fe2b257-7e1f-46c2-aed9-0593c533e290
Oct 10 06:13:18 np0005479822 nova_compute[235132]: 2025-10-10 10:13:18.100 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:13:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001a50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:19 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:19.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:13:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:20 np0005479822 nova_compute[235132]: 2025-10-10 10:13:20.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:20 np0005479822 nova_compute[235132]: 2025-10-10 10:13:20.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:20 np0005479822 nova_compute[235132]: 2025-10-10 10:13:20.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:21 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001bf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:21.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:22 np0005479822 nova_compute[235132]: 2025-10-10 10:13:22.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:22 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:13:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:13:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:23 np0005479822 podman[240978]: 2025-10-10 10:13:23.963439293 +0000 UTC m=+0.062216901 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:13:23 np0005479822 podman[240977]: 2025-10-10 10:13:23.964531374 +0000 UTC m=+0.065183944 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 06:13:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:23.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:24 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:25 np0005479822 podman[241015]: 2025-10-10 10:13:25.016757582 +0000 UTC m=+0.119703664 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 06:13:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:25 np0005479822 nova_compute[235132]: 2025-10-10 10:13:25.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:25 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 06:13:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:26.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:27 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:27 np0005479822 nova_compute[235132]: 2025-10-10 10:13:27.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:29 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:13:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:13:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:13:30 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:13:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:30 np0005479822 nova_compute[235132]: 2025-10-10 10:13:30.698 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091195.6956131, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 06:13:30 np0005479822 nova_compute[235132]: 2025-10-10 10:13:30.698 2 INFO nova.compute.manager [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Stopped (Lifecycle Event)
Oct 10 06:13:30 np0005479822 nova_compute[235132]: 2025-10-10 10:13:30.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:30 np0005479822 nova_compute[235132]: 2025-10-10 10:13:30.755 2 DEBUG nova.compute.manager [None req-51315287-710b-4a50-af0e-b000cddec616 - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:13:30 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:31.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:32 np0005479822 nova_compute[235132]: 2025-10-10 10:13:32.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:13:32 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101333 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 06:13:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:33.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:13:34 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:13:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:34 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:35 np0005479822 nova_compute[235132]: 2025-10-10 10:13:35.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:35 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:35.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:37 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:37 np0005479822 nova_compute[235132]: 2025-10-10 10:13:37.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:37 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:37.855 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:13:37 np0005479822 nova_compute[235132]: 2025-10-10 10:13:37.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:37 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:37.857 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:13:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:37.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:38.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:39.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:40 np0005479822 nova_compute[235132]: 2025-10-10 10:13:40.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:40 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:41 np0005479822 podman[241183]: 2025-10-10 10:13:41.024806503 +0000 UTC m=+0.117047341 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:13:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.117 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.118 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.137 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.235 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.236 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.249 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.249 2 INFO nova.compute.claims [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.396 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:41 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:13:41 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1897484170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.840 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.849 2 DEBUG nova.compute.provider_tree [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.920 2 DEBUG nova.scheduler.client.report [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.950 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:41 np0005479822 nova_compute[235132]: 2025-10-10 10:13:41.952 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 06:13:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:41.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.010 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.011 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.045 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.075 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.193 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.195 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.196 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Creating image(s)#033[00m
Oct 10 06:13:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:42.208 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.237 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.284 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.327 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.334 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:42.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.418 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.420 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.422 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.422 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.471 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.478 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 12298a8d-d383-47da-91e4-0a918e153f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.796 2 DEBUG nova.policy [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 06:13:42 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:42 np0005479822 nova_compute[235132]: 2025-10-10 10:13:42.924 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 12298a8d-d383-47da-91e4-0a918e153f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.006 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 06:13:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.162 2 DEBUG nova.objects.instance [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.411 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.411 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Ensure instance console log exists: /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.412 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.412 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.413 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:43 np0005479822 nova_compute[235132]: 2025-10-10 10:13:43.834 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Successfully created port: 446b0e59-d2be-42d8-801f-7ba63ba76e66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 06:13:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101343 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:13:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:43.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:44.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:44.861 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:44 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.036 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Successfully updated port: 446b0e59-d2be-42d8-801f-7ba63ba76e66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.064 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.065 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.065 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:13:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.161 2 DEBUG nova.compute.manager [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.162 2 DEBUG nova.compute.manager [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing instance network info cache due to event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.162 2 DEBUG oslo_concurrency.lockutils [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.315 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 06:13:45 np0005479822 nova_compute[235132]: 2025-10-10 10:13:45.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:45.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.127 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.158 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.159 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance network_info: |[{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.160 2 DEBUG oslo_concurrency.lockutils [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.160 2 DEBUG nova.network.neutron [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.168 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start _get_guest_xml network_info=[{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.174 2 WARNING nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.184 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.185 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.189 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.190 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.191 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.191 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.192 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.192 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.193 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.193 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.193 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.194 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.194 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.195 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.195 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.196 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.200 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:13:46 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1301345308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.724 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.764 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:46 np0005479822 nova_compute[235132]: 2025-10-10 10:13:46.769 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:46 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:13:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2845144114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.205 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.207 2 DEBUG nova.virt.libvirt.vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-742591551',display_name='tempest-TestNetworkBasicOps-server-742591551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-742591551',id=4,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCoryMMDZ6cZj1EAzGK4muKCZLgNsQyPcigwS48pCfmWHQQLrGNGrCkXZ7qqZSzWLyfX4m7fzgUMEko2IR4dU9srCI10SLqm/ZSwQK7hB66f+rf62WEii+W4TMQEFu9vA==',key_name='tempest-TestNetworkBasicOps-766718028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-svhla3ss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:13:42Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=12298a8d-d383-47da-91e4-0a918e153f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.207 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.208 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.210 2 DEBUG nova.objects.instance [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.225 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] End _get_guest_xml xml=<domain type="kvm">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <uuid>12298a8d-d383-47da-91e4-0a918e153f1d</uuid>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <name>instance-00000004</name>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <memory>131072</memory>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <vcpu>1</vcpu>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:name>tempest-TestNetworkBasicOps-server-742591551</nova:name>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:creationTime>2025-10-10 10:13:46</nova:creationTime>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:flavor name="m1.nano">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:memory>128</nova:memory>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:disk>1</nova:disk>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:swap>0</nova:swap>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:vcpus>1</nova:vcpus>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </nova:flavor>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:owner>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </nova:owner>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <nova:ports>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <nova:port uuid="446b0e59-d2be-42d8-801f-7ba63ba76e66">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        </nova:port>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </nova:ports>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </nova:instance>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <sysinfo type="smbios">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <entry name="manufacturer">RDO</entry>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <entry name="product">OpenStack Compute</entry>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <entry name="serial">12298a8d-d383-47da-91e4-0a918e153f1d</entry>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <entry name="uuid">12298a8d-d383-47da-91e4-0a918e153f1d</entry>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <entry name="family">Virtual Machine</entry>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <boot dev="hd"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <smbios mode="sysinfo"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <vmcoreinfo/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <clock offset="utc">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <timer name="hpet" present="no"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <cpu mode="host-model" match="exact">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <disk type="network" device="disk">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/12298a8d-d383-47da-91e4-0a918e153f1d_disk">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <target dev="vda" bus="virtio"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <disk type="network" device="cdrom">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/12298a8d-d383-47da-91e4-0a918e153f1d_disk.config">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <target dev="sda" bus="sata"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <interface type="ethernet">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <mac address="fa:16:3e:9d:f6:71"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <mtu size="1442"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <target dev="tap446b0e59-d2"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <serial type="pty">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <log file="/var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/console.log" append="off"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <input type="tablet" bus="usb"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <rng model="virtio">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <backend model="random">/dev/urandom</backend>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <controller type="usb" index="0"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    <memballoon model="virtio">
Oct 10 06:13:47 np0005479822 nova_compute[235132]:      <stats period="10"/>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:13:47 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:13:47 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:13:47 np0005479822 nova_compute[235132]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.227 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Preparing to wait for external event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.227 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.228 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.228 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.229 2 DEBUG nova.virt.libvirt.vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-742591551',display_name='tempest-TestNetworkBasicOps-server-742591551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-742591551',id=4,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCoryMMDZ6cZj1EAzGK4muKCZLgNsQyPcigwS48pCfmWHQQLrGNGrCkXZ7qqZSzWLyfX4m7fzgUMEko2IR4dU9srCI10SLqm/ZSwQK7hB66f+rf62WEii+W4TMQEFu9vA==',key_name='tempest-TestNetworkBasicOps-766718028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-svhla3ss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:13:42Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=12298a8d-d383-47da-91e4-0a918e153f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.229 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.230 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.230 2 DEBUG os_vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap446b0e59-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap446b0e59-d2, col_values=(('external_ids', {'iface-id': '446b0e59-d2be-42d8-801f-7ba63ba76e66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:f6:71', 'vm-uuid': '12298a8d-d383-47da-91e4-0a918e153f1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:47 np0005479822 NetworkManager[44982]: <info>  [1760091227.2397] manager: (tap446b0e59-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.249 2 INFO os_vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2')#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.324 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.325 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.326 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:9d:f6:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.326 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Using config drive#033[00m
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.364 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:47 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:47 np0005479822 nova_compute[235132]: 2025-10-10 10:13:47.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:47.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.001 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Creating config drive at /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.007 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdonamwlu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.136 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdonamwlu" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.171 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.177 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.199 2 DEBUG nova.network.neutron [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated VIF entry in instance network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.200 2 DEBUG nova.network.neutron [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.226 2 DEBUG oslo_concurrency.lockutils [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.347 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.348 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deleting local config drive /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config because it was imported into RBD.#033[00m
Oct 10 06:13:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:48 np0005479822 kernel: tap446b0e59-d2: entered promiscuous mode
Oct 10 06:13:48 np0005479822 NetworkManager[44982]: <info>  [1760091228.4291] manager: (tap446b0e59-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 10 06:13:48 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:48Z|00057|binding|INFO|Claiming lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 for this chassis.
Oct 10 06:13:48 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:48Z|00058|binding|INFO|446b0e59-d2be-42d8-801f-7ba63ba76e66: Claiming fa:16:3e:9d:f6:71 10.100.0.14
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 systemd-machined[191637]: New machine qemu-3-instance-00000004.
Oct 10 06:13:48 np0005479822 systemd-udevd[241532]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:13:48 np0005479822 NetworkManager[44982]: <info>  [1760091228.4966] device (tap446b0e59-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:13:48 np0005479822 NetworkManager[44982]: <info>  [1760091228.4976] device (tap446b0e59-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:13:48 np0005479822 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.530 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:f6:71 10.100.0.14'], port_security=['fa:16:3e:9d:f6:71 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '12298a8d-d383-47da-91e4-0a918e153f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e222deba-0df5-4a21-bff7-930fc17b2ea1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21e2152f-e965-46e3-9774-988f8fdf189b, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=446b0e59-d2be-42d8-801f-7ba63ba76e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.531 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 446b0e59-d2be-42d8-801f-7ba63ba76e66 in datapath c8850c4c-dc38-4440-9c03-f2dd59684fe6 bound to our chassis#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.532 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8850c4c-dc38-4440-9c03-f2dd59684fe6#033[00m
Oct 10 06:13:48 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:48Z|00059|binding|INFO|Setting lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 ovn-installed in OVS
Oct 10 06:13:48 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:48Z|00060|binding|INFO|Setting lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 up in Southbound
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.552 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[cb86f869-7086-4dbe-9e6e-c2b89d22727f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.553 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8850c4c-d1 in ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.555 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8850c4c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.556 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[06cafb6a-8361-4ca7-9cb8-7a868cf23506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.556 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b16485-6585-4e95-be88-8d67e25701f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.571 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[69fd44c9-13fd-4024-a84d-52ada81088bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.596 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7f660a-b237-4c31-91ff-43a1dce7338d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.629 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[c91784a9-8261-475d-a48b-d47f8533ecdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.636 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[17c3dc1e-1665-4d80-bedd-c9b231bd79ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 NetworkManager[44982]: <info>  [1760091228.6371] manager: (tapc8850c4c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.686 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ae38ea-44b2-4621-b96e-977a58cf3285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.690 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[f222751c-cfc7-4ac6-a33b-bd9ce087b6cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 NetworkManager[44982]: <info>  [1760091228.7145] device (tapc8850c4c-d0): carrier: link connected
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.721 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[575988a0-5646-4639-9a0c-d7762df3729e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.742 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[cf800ab3-ec1e-4b54-be5a-a6c260535f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8850c4c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:14:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414242, 'reachable_time': 20031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241565, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.762 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc25817-26a9-4df1-b1c3-763e3484033a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:1444'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414242, 'tstamp': 414242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241567, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.783 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6affae5b-9f8b-403e-a091-059d932cacab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8850c4c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:14:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414242, 'reachable_time': 20031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241568, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.812 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[160e8f30-e392-418f-948e-aa3a6d285bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.878 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa860dd-d5d6-4fcc-857c-8900a4a9d64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.880 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8850c4c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.880 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.881 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8850c4c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:48 np0005479822 NetworkManager[44982]: <info>  [1760091228.8839] manager: (tapc8850c4c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 10 06:13:48 np0005479822 kernel: tapc8850c4c-d0: entered promiscuous mode
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.887 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8850c4c-d0, col_values=(('external_ids', {'iface-id': '185907ee-d118-486d-93ad-c5a1b6a3a149'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:48 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:48Z|00061|binding|INFO|Releasing lport 185907ee-d118-486d-93ad-c5a1b6a3a149 from this chassis (sb_readonly=0)
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.895 2 DEBUG nova.compute.manager [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.896 2 DEBUG oslo_concurrency.lockutils [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.896 2 DEBUG oslo_concurrency.lockutils [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.897 2 DEBUG oslo_concurrency.lockutils [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.897 2 DEBUG nova.compute.manager [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Processing event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 06:13:48 np0005479822 nova_compute[235132]: 2025-10-10 10:13:48.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.909 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8850c4c-dc38-4440-9c03-f2dd59684fe6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8850c4c-dc38-4440-9c03-f2dd59684fe6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.909 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a8767d74-714a-40cf-9711-59cfe5109b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.911 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-c8850c4c-dc38-4440-9c03-f2dd59684fe6
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/c8850c4c-dc38-4440-9c03-f2dd59684fe6.pid.haproxy
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID c8850c4c-dc38-4440-9c03-f2dd59684fe6
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:13:48 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.912 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'env', 'PROCESS_TAG=haproxy-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8850c4c-dc38-4440-9c03-f2dd59684fe6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:13:49 np0005479822 nova_compute[235132]: 2025-10-10 10:13:49.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:49 np0005479822 podman[241600]: 2025-10-10 10:13:49.332305154 +0000 UTC m=+0.079300028 container create 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:13:49 np0005479822 podman[241600]: 2025-10-10 10:13:49.294024047 +0000 UTC m=+0.041018941 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:13:49 np0005479822 systemd[1]: Started libpod-conmon-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope.
Oct 10 06:13:49 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:13:49 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c55767b3c231b50f03b73e902ec5a6e120dd175734d051879abbfb9aabc4097/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:13:49 np0005479822 podman[241600]: 2025-10-10 10:13:49.445983223 +0000 UTC m=+0.192978127 container init 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 06:13:49 np0005479822 podman[241600]: 2025-10-10 10:13:49.452528131 +0000 UTC m=+0.199523005 container start 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 06:13:49 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : New worker (241629) forked
Oct 10 06:13:49 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : Loading success.
Oct 10 06:13:49 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:49.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.125 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.125 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091230.1245322, 12298a8d-d383-47da-91e4-0a918e153f1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.126 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Started (Lifecycle Event)#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.134 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.138 2 INFO nova.virt.libvirt.driver [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance spawned successfully.#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.139 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.162 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.169 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.181 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.182 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.182 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.183 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.184 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.185 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.222 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.223 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091230.1280189, 12298a8d-d383-47da-91e4-0a918e153f1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.224 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Paused (Lifecycle Event)#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.264 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.270 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091230.133719, 12298a8d-d383-47da-91e4-0a918e153f1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.270 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Resumed (Lifecycle Event)#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.284 2 INFO nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 8.09 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.285 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.299 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.304 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.327 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.365 2 INFO nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 9.17 seconds to build instance.#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.386 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:50.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:50 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.977 2 DEBUG nova.compute.manager [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.978 2 DEBUG oslo_concurrency.lockutils [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.978 2 DEBUG oslo_concurrency.lockutils [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.979 2 DEBUG oslo_concurrency.lockutils [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.979 2 DEBUG nova.compute.manager [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] No waiting events found dispatching network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:13:50 np0005479822 nova_compute[235132]: 2025-10-10 10:13:50.980 2 WARNING nova.compute.manager [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received unexpected event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 for instance with vm_state active and task_state None.#033[00m
Oct 10 06:13:51 np0005479822 nova_compute[235132]: 2025-10-10 10:13:51.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:51 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:52.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.068 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.223 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.223 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.224 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.224 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:52 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:52Z|00062|binding|INFO|Releasing lport 185907ee-d118-486d-93ad-c5a1b6a3a149 from this chassis (sb_readonly=0)
Oct 10 06:13:52 np0005479822 NetworkManager[44982]: <info>  [1760091232.5414] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 10 06:13:52 np0005479822 NetworkManager[44982]: <info>  [1760091232.5434] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:52 np0005479822 ovn_controller[131749]: 2025-10-10T10:13:52Z|00063|binding|INFO|Releasing lport 185907ee-d118-486d-93ad-c5a1b6a3a149 from this chassis (sb_readonly=0)
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:52 np0005479822 nova_compute[235132]: 2025-10-10 10:13:52.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:52 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:53 np0005479822 nova_compute[235132]: 2025-10-10 10:13:53.056 2 DEBUG nova.compute.manager [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:13:53 np0005479822 nova_compute[235132]: 2025-10-10 10:13:53.057 2 DEBUG nova.compute.manager [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing instance network info cache due to event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:13:53 np0005479822 nova_compute[235132]: 2025-10-10 10:13:53.057 2 DEBUG oslo_concurrency.lockutils [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:13:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.625 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG oslo_concurrency.lockutils [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG nova.network.neutron [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.648 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:54 np0005479822 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:13:54 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:54 np0005479822 podman[241677]: 2025-10-10 10:13:54.977421934 +0000 UTC m=+0.071485794 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 06:13:54 np0005479822 podman[241676]: 2025-10-10 10:13:54.992243869 +0000 UTC m=+0.082811824 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.072 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.072 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.073 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.073 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:13:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1438646568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.513 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.590 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.591 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:13:55 np0005479822 podman[241743]: 2025-10-10 10:13:55.672027765 +0000 UTC m=+0.108574239 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.755 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.756 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4763MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.756 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.757 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.828 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance 12298a8d-d383-47da-91e4-0a918e153f1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.829 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.829 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:13:55 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:55 np0005479822 nova_compute[235132]: 2025-10-10 10:13:55.878 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:13:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:13:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:13:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:13:56 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2710916285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:13:56 np0005479822 nova_compute[235132]: 2025-10-10 10:13:56.368 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:13:56 np0005479822 nova_compute[235132]: 2025-10-10 10:13:56.376 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:13:56 np0005479822 nova_compute[235132]: 2025-10-10 10:13:56.400 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:13:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:56 np0005479822 nova_compute[235132]: 2025-10-10 10:13:56.420 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:13:56 np0005479822 nova_compute[235132]: 2025-10-10 10:13:56.421 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:13:56 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:57 np0005479822 nova_compute[235132]: 2025-10-10 10:13:57.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:57 np0005479822 nova_compute[235132]: 2025-10-10 10:13:57.421 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:13:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:13:57 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:57 np0005479822 nova_compute[235132]: 2025-10-10 10:13:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:13:57 np0005479822 nova_compute[235132]: 2025-10-10 10:13:57.925 2 DEBUG nova.network.neutron [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated VIF entry in instance network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:13:57 np0005479822 nova_compute[235132]: 2025-10-10 10:13:57.926 2 DEBUG nova.network.neutron [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:13:57 np0005479822 nova_compute[235132]: 2025-10-10 10:13:57.944 2 DEBUG oslo_concurrency.lockutils [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:13:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:58.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:13:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:13:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:13:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:13:59 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:14:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:14:00 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:01 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:02.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:02 np0005479822 nova_compute[235132]: 2025-10-10 10:14:02.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:02 np0005479822 ovn_controller[131749]: 2025-10-10T10:14:02Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:f6:71 10.100.0.14
Oct 10 06:14:02 np0005479822 ovn_controller[131749]: 2025-10-10T10:14:02Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:f6:71 10.100.0.14
Oct 10 06:14:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:02 np0005479822 nova_compute[235132]: 2025-10-10 10:14:02.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:02 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:04 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:05 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:06.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:06 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 06:14:07 np0005479822 nova_compute[235132]: 2025-10-10 10:14:07.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:07 np0005479822 nova_compute[235132]: 2025-10-10 10:14:07.751 2 INFO nova.compute.manager [None req-80302de6-4e29-4b7e-86e2-2dcb26b7c51a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Get console output#033[00m
Oct 10 06:14:07 np0005479822 nova_compute[235132]: 2025-10-10 10:14:07.756 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 10 06:14:07 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004760 fd 39 proxy ignored for local
Oct 10 06:14:07 np0005479822 kernel: ganesha.nfsd[239748]: segfault at 50 ip 00007f53f66a832e sp 00007f53b7ffe210 error 4 in libntirpc.so.5.8[7f53f668d000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 06:14:07 np0005479822 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 06:14:07 np0005479822 nova_compute[235132]: 2025-10-10 10:14:07.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:07 np0005479822 systemd[1]: Started Process Core Dump (PID 241823/UID 0).
Oct 10 06:14:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:14:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:14:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:08.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:08 np0005479822 systemd-coredump[241824]: Process 238142 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 64:#012#0  0x00007f53f66a832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct 10 06:14:09 np0005479822 systemd[1]: systemd-coredump@12-241823-0.service: Deactivated successfully.
Oct 10 06:14:09 np0005479822 systemd[1]: systemd-coredump@12-241823-0.service: Consumed 1.136s CPU time.
Oct 10 06:14:09 np0005479822 podman[241829]: 2025-10-10 10:14:09.159994465 +0000 UTC m=+0.029738994 container died 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 06:14:09 np0005479822 systemd[1]: var-lib-containers-storage-overlay-f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc-merged.mount: Deactivated successfully.
Oct 10 06:14:09 np0005479822 podman[241829]: 2025-10-10 10:14:09.205670254 +0000 UTC m=+0.075414763 container remove 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 06:14:09 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 06:14:09 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 06:14:09 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.053s CPU time.
Oct 10 06:14:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:11 np0005479822 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG nova.compute.manager [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:14:11 np0005479822 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG nova.compute.manager [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing instance network info cache due to event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:14:11 np0005479822 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG oslo_concurrency.lockutils [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:14:11 np0005479822 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG oslo_concurrency.lockutils [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:14:11 np0005479822 nova_compute[235132]: 2025-10-10 10:14:11.001 2 DEBUG nova.network.neutron [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:14:11 np0005479822 podman[241875]: 2025-10-10 10:14:11.971349449 +0000 UTC m=+0.068711829 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:14:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:12 np0005479822 nova_compute[235132]: 2025-10-10 10:14:12.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:12.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:12 np0005479822 nova_compute[235132]: 2025-10-10 10:14:12.795 2 DEBUG nova.network.neutron [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated VIF entry in instance network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:14:12 np0005479822 nova_compute[235132]: 2025-10-10 10:14:12.796 2 DEBUG nova.network.neutron [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:14:12 np0005479822 nova_compute[235132]: 2025-10-10 10:14:12.818 2 DEBUG oslo_concurrency.lockutils [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:14:12 np0005479822 nova_compute[235132]: 2025-10-10 10:14:12.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101413 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:14:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:14.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:17 np0005479822 nova_compute[235132]: 2025-10-10 10:14:17.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:17 np0005479822 nova_compute[235132]: 2025-10-10 10:14:17.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:18.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:19 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 13.
Oct 10 06:14:19 np0005479822 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:14:19 np0005479822 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.053s CPU time.
Oct 10 06:14:19 np0005479822 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 06:14:19 np0005479822 podman[241974]: 2025-10-10 10:14:19.989760629 +0000 UTC m=+0.079447274 container create d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 06:14:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:20 np0005479822 podman[241974]: 2025-10-10 10:14:19.954452993 +0000 UTC m=+0.044139678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:14:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 06:14:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:14:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:14:20 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 06:14:20 np0005479822 podman[241974]: 2025-10-10 10:14:20.108556347 +0000 UTC m=+0.198243372 container init d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 06:14:20 np0005479822 podman[241974]: 2025-10-10 10:14:20.11747771 +0000 UTC m=+0.207164325 container start d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 06:14:20 np0005479822 bash[241974]: d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596
Oct 10 06:14:20 np0005479822 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 06:14:20 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:20.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:22 np0005479822 nova_compute[235132]: 2025-10-10 10:14:22.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:22 np0005479822 nova_compute[235132]: 2025-10-10 10:14:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:26 np0005479822 podman[242036]: 2025-10-10 10:14:26.015568986 +0000 UTC m=+0.102966076 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 06:14:26 np0005479822 podman[242035]: 2025-10-10 10:14:26.019407521 +0000 UTC m=+0.110138472 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 06:14:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:26 np0005479822 podman[242037]: 2025-10-10 10:14:26.054377058 +0000 UTC m=+0.135017634 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Oct 10 06:14:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:26 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:26 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:26 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:26 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:27 np0005479822 nova_compute[235132]: 2025-10-10 10:14:27.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:27 np0005479822 nova_compute[235132]: 2025-10-10 10:14:27.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:30 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:30 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:30 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:31 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:31 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:32.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:32 np0005479822 nova_compute[235132]: 2025-10-10 10:14:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:32.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:32 np0005479822 nova_compute[235132]: 2025-10-10 10:14:32.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:34.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:34.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:35 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:14:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:35 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:35 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:35 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:36 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:36 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:14:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:14:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:36.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:37 np0005479822 nova_compute[235132]: 2025-10-10 10:14:37.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:37 np0005479822 nova_compute[235132]: 2025-10-10 10:14:37.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:38.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101439 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 06:14:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [NOTICE] 282/101439 (4) : haproxy version is 2.3.17-d1c9119
Oct 10 06:14:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [NOTICE] 282/101439 (4) : path to executable is /usr/local/sbin/haproxy
Oct 10 06:14:39 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [ALERT] 282/101439 (4) : backend 'backend' has no server available!
Oct 10 06:14:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:40.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:40 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:40 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:40 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:41 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:41 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:14:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:42.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:42.209 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:42.210 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:42.211 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:42 np0005479822 nova_compute[235132]: 2025-10-10 10:14:42.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:42 np0005479822 nova_compute[235132]: 2025-10-10 10:14:42.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:42 np0005479822 podman[242305]: 2025-10-10 10:14:42.966381 +0000 UTC m=+0.060977959 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 06:14:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:45 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:45.776 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:14:45 np0005479822 nova_compute[235132]: 2025-10-10 10:14:45.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:45.779 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:14:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.064699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287064733, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2348, "num_deletes": 251, "total_data_size": 6162003, "memory_usage": 6261280, "flush_reason": "Manual Compaction"}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287083850, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3994523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25994, "largest_seqno": 28337, "table_properties": {"data_size": 3985243, "index_size": 5774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19477, "raw_average_key_size": 20, "raw_value_size": 3966615, "raw_average_value_size": 4119, "num_data_blocks": 254, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091080, "oldest_key_time": 1760091080, "file_creation_time": 1760091287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 19249 microseconds, and 9032 cpu microseconds.
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.083937) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3994523 bytes OK
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.083972) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.092747) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.092776) EVENT_LOG_v1 {"time_micros": 1760091287092767, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.092807) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6151592, prev total WAL file size 6151592, number of live WAL files 2.
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.095547) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3900KB)], [51(11MB)]
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287095628, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16448022, "oldest_snapshot_seqno": -1}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5814 keys, 14326170 bytes, temperature: kUnknown
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287157599, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14326170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14286918, "index_size": 23590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147756, "raw_average_key_size": 25, "raw_value_size": 14181498, "raw_average_value_size": 2439, "num_data_blocks": 964, "num_entries": 5814, "num_filter_entries": 5814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.157918) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14326170 bytes
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.159388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 265.0 rd, 230.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6332, records dropped: 518 output_compression: NoCompression
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.159405) EVENT_LOG_v1 {"time_micros": 1760091287159396, "job": 30, "event": "compaction_finished", "compaction_time_micros": 62064, "compaction_time_cpu_micros": 41192, "output_level": 6, "num_output_files": 1, "total_output_size": 14326170, "num_input_records": 6332, "num_output_records": 5814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287160292, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287162449, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.095460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:14:47 np0005479822 nova_compute[235132]: 2025-10-10 10:14:47.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:47 np0005479822 nova_compute[235132]: 2025-10-10 10:14:47.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:49 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:49.784 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:14:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:51 np0005479822 nova_compute[235132]: 2025-10-10 10:14:51.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:51 np0005479822 nova_compute[235132]: 2025-10-10 10:14:51.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:52 np0005479822 nova_compute[235132]: 2025-10-10 10:14:52.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:52 np0005479822 nova_compute[235132]: 2025-10-10 10:14:52.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:52 np0005479822 nova_compute[235132]: 2025-10-10 10:14:52.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:14:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:52 np0005479822 nova_compute[235132]: 2025-10-10 10:14:52.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:52.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:52 np0005479822 nova_compute[235132]: 2025-10-10 10:14:52.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.193 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.196 2 INFO nova.compute.manager [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Terminating instance#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.196 2 DEBUG nova.compute.manager [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 06:14:53 np0005479822 kernel: tap446b0e59-d2 (unregistering): left promiscuous mode
Oct 10 06:14:53 np0005479822 NetworkManager[44982]: <info>  [1760091293.2488] device (tap446b0e59-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 ovn_controller[131749]: 2025-10-10T10:14:53Z|00064|binding|INFO|Releasing lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 from this chassis (sb_readonly=0)
Oct 10 06:14:53 np0005479822 ovn_controller[131749]: 2025-10-10T10:14:53Z|00065|binding|INFO|Setting lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 down in Southbound
Oct 10 06:14:53 np0005479822 ovn_controller[131749]: 2025-10-10T10:14:53Z|00066|binding|INFO|Removing iface tap446b0e59-d2 ovn-installed in OVS
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.265 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:f6:71 10.100.0.14'], port_security=['fa:16:3e:9d:f6:71 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '12298a8d-d383-47da-91e4-0a918e153f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e222deba-0df5-4a21-bff7-930fc17b2ea1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21e2152f-e965-46e3-9774-988f8fdf189b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=446b0e59-d2be-42d8-801f-7ba63ba76e66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.267 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 446b0e59-d2be-42d8-801f-7ba63ba76e66 in datapath c8850c4c-dc38-4440-9c03-f2dd59684fe6 unbound from our chassis#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.267 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8850c4c-dc38-4440-9c03-f2dd59684fe6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.273 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1d723388-9917-4a89-a30e-9bf4d877ad18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.274 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 namespace which is not needed anymore#033[00m
Oct 10 06:14:53 np0005479822 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 10 06:14:53 np0005479822 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 16.377s CPU time.
Oct 10 06:14:53 np0005479822 systemd-machined[191637]: Machine qemu-3-instance-00000004 terminated.
Oct 10 06:14:53 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : haproxy version is 2.8.14-c23fe91
Oct 10 06:14:53 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : path to executable is /usr/sbin/haproxy
Oct 10 06:14:53 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [WARNING]  (241620) : Exiting Master process...
Oct 10 06:14:53 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [WARNING]  (241620) : Exiting Master process...
Oct 10 06:14:53 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [ALERT]    (241620) : Current worker (241629) exited with code 143 (Terminated)
Oct 10 06:14:53 np0005479822 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [WARNING]  (241620) : All workers exited. Exiting... (0)
Oct 10 06:14:53 np0005479822 systemd[1]: libpod-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope: Deactivated successfully.
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 conmon[241616]: conmon 45c916acf584203b7020 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope/container/memory.events
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 podman[242353]: 2025-10-10 10:14:53.42307073 +0000 UTC m=+0.052233613 container died 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.434 2 INFO nova.virt.libvirt.driver [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance destroyed successfully.#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.435 2 DEBUG nova.objects.instance [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:14:53 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2-userdata-shm.mount: Deactivated successfully.
Oct 10 06:14:53 np0005479822 systemd[1]: var-lib-containers-storage-overlay-7c55767b3c231b50f03b73e902ec5a6e120dd175734d051879abbfb9aabc4097-merged.mount: Deactivated successfully.
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.472 2 DEBUG nova.virt.libvirt.vif [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-742591551',display_name='tempest-TestNetworkBasicOps-server-742591551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-742591551',id=4,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCoryMMDZ6cZj1EAzGK4muKCZLgNsQyPcigwS48pCfmWHQQLrGNGrCkXZ7qqZSzWLyfX4m7fzgUMEko2IR4dU9srCI10SLqm/ZSwQK7hB66f+rf62WEii+W4TMQEFu9vA==',key_name='tempest-TestNetworkBasicOps-766718028',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:13:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-svhla3ss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:13:50Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=12298a8d-d383-47da-91e4-0a918e153f1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.474 2 DEBUG nova.network.os_vif_util [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.475 2 DEBUG nova.network.os_vif_util [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:14:53 np0005479822 podman[242353]: 2025-10-10 10:14:53.476606687 +0000 UTC m=+0.105769580 container cleanup 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.476 2 DEBUG os_vif [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap446b0e59-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.488 2 INFO os_vif [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2')#033[00m
Oct 10 06:14:53 np0005479822 systemd[1]: libpod-conmon-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope: Deactivated successfully.
Oct 10 06:14:53 np0005479822 podman[242397]: 2025-10-10 10:14:53.558695418 +0000 UTC m=+0.051512604 container remove 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.568 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c54ab7e9-06be-4564-a6a8-53ef45e36e5c]: (4, ('Fri Oct 10 10:14:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 (45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2)\n45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2\nFri Oct 10 10:14:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 (45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2)\n45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.571 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db67d3-0f3c-48cf-96b5-f6ab2a8374ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.572 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8850c4c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:14:53 np0005479822 kernel: tapc8850c4c-d0: left promiscuous mode
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.592 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[95a4b513-4db0-4109-8056-83dd8d557ea7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.613 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1f07d8-f92f-41a6-b189-223fa3805669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.615 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[94c943a5-cc2a-4986-987e-82c796536604]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.628 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5cda8a65-cae0-4ac7-8c8e-a9a2fb1f6eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414233, 'reachable_time': 32950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242428, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 systemd[1]: run-netns-ovnmeta\x2dc8850c4c\x2ddc38\x2d4440\x2d9c03\x2df2dd59684fe6.mount: Deactivated successfully.
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.634 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 06:14:53 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.634 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f4ef2-93b4-4360-8101-e0b9a2cb7e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.913 2 INFO nova.virt.libvirt.driver [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deleting instance files /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d_del#033[00m
Oct 10 06:14:53 np0005479822 nova_compute[235132]: 2025-10-10 10:14:53.914 2 INFO nova.virt.libvirt.driver [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deletion of /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d_del complete#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:14:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.106 2 INFO nova.compute.manager [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.107 2 DEBUG oslo.service.loopingcall [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.107 2 DEBUG nova.compute.manager [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.108 2 DEBUG nova.network.neutron [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.158 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.158 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.319 2 DEBUG nova.compute.manager [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-unplugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.320 2 DEBUG oslo_concurrency.lockutils [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.320 2 DEBUG oslo_concurrency.lockutils [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.320 2 DEBUG oslo_concurrency.lockutils [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.321 2 DEBUG nova.compute.manager [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] No waiting events found dispatching network-vif-unplugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.321 2 DEBUG nova.compute.manager [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-unplugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 10 06:14:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:54 np0005479822 nova_compute[235132]: 2025-10-10 10:14:54.995 2 DEBUG nova.network.neutron [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.013 2 INFO nova.compute.manager [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 0.91 seconds to deallocate network for instance.#033[00m
Oct 10 06:14:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:14:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3034 syncs, 3.72 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2207 writes, 6322 keys, 2207 commit groups, 1.0 writes per commit group, ingest: 6.08 MB, 0.01 MB/s#012Interval WAL: 2207 writes, 970 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.063 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.064 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.117 2 DEBUG oslo_concurrency.processutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:14:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:14:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4254524373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.627 2 DEBUG oslo_concurrency.processutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.635 2 DEBUG nova.compute.provider_tree [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.663 2 DEBUG nova.scheduler.client.report [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.722 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.750 2 INFO nova.scheduler.client.report [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 12298a8d-d383-47da-91e4-0a918e153f1d#033[00m
Oct 10 06:14:55 np0005479822 nova_compute[235132]: 2025-10-10 10:14:55.852 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.440 2 DEBUG nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.440 2 DEBUG oslo_concurrency.lockutils [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.441 2 DEBUG oslo_concurrency.lockutils [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.441 2 DEBUG oslo_concurrency.lockutils [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.441 2 DEBUG nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] No waiting events found dispatching network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.441 2 WARNING nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received unexpected event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 06:14:56 np0005479822 nova_compute[235132]: 2025-10-10 10:14:56.442 2 DEBUG nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-deleted-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:14:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:56 np0005479822 podman[242454]: 2025-10-10 10:14:56.968799161 +0000 UTC m=+0.064768667 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 06:14:57 np0005479822 podman[242456]: 2025-10-10 10:14:57.004718836 +0000 UTC m=+0.088611881 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 06:14:57 np0005479822 podman[242455]: 2025-10-10 10:14:57.012056636 +0000 UTC m=+0.095365395 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.071 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.072 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:14:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:14:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:14:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/688430293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.533 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.773 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.774 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4888MB free_disk=59.94269943237305GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.774 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.775 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.856 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.856 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.876 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:14:57 np0005479822 nova_compute[235132]: 2025-10-10 10:14:57.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:14:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:14:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:14:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:14:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:14:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:14:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:14:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2213807489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:14:58 np0005479822 nova_compute[235132]: 2025-10-10 10:14:58.323 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:14:58 np0005479822 nova_compute[235132]: 2025-10-10 10:14:58.331 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:14:58 np0005479822 nova_compute[235132]: 2025-10-10 10:14:58.349 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:14:58 np0005479822 nova_compute[235132]: 2025-10-10 10:14:58.380 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:14:58 np0005479822 nova_compute[235132]: 2025-10-10 10:14:58.381 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:14:58 np0005479822 nova_compute[235132]: 2025-10-10 10:14:58.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:14:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:14:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:14:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:14:59 np0005479822 nova_compute[235132]: 2025-10-10 10:14:59.383 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:00.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:02.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:02 np0005479822 nova_compute[235132]: 2025-10-10 10:15:02.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:03 np0005479822 nova_compute[235132]: 2025-10-10 10:15:03.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:03 np0005479822 nova_compute[235132]: 2025-10-10 10:15:03.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:03 np0005479822 nova_compute[235132]: 2025-10-10 10:15:03.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:08 np0005479822 nova_compute[235132]: 2025-10-10 10:15:08.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:08 np0005479822 nova_compute[235132]: 2025-10-10 10:15:08.432 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091293.430017, 12298a8d-d383-47da-91e4-0a918e153f1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:15:08 np0005479822 nova_compute[235132]: 2025-10-10 10:15:08.432 2 INFO nova.compute.manager [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Stopped (Lifecycle Event)#033[00m
Oct 10 06:15:08 np0005479822 nova_compute[235132]: 2025-10-10 10:15:08.458 2 DEBUG nova.compute.manager [None req-3b3f819e-d943-4e1b-87fa-e67f3245c99f - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:15:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:08 np0005479822 nova_compute[235132]: 2025-10-10 10:15:08.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:10.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:12.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:13 np0005479822 nova_compute[235132]: 2025-10-10 10:15:13.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:13 np0005479822 nova_compute[235132]: 2025-10-10 10:15:13.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:13 np0005479822 podman[242598]: 2025-10-10 10:15:13.98358317 +0000 UTC m=+0.081830465 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:15:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:14.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:16.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:18 np0005479822 nova_compute[235132]: 2025-10-10 10:15:18.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:18 np0005479822 nova_compute[235132]: 2025-10-10 10:15:18.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.190 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.190 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.208 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.296 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.297 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.305 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.305 2 INFO nova.compute.claims [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.427 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:19 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:15:19 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3825464269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.868 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.878 2 DEBUG nova.compute.provider_tree [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.907 2 DEBUG nova.scheduler.client.report [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.946 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.947 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.988 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 06:15:19 np0005479822 nova_compute[235132]: 2025-10-10 10:15:19.989 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.009 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.029 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 06:15:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.124 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.126 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.126 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Creating image(s)#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.158 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.184 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.210 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.213 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.266 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.267 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.267 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.267 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.294 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.297 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.602 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.684 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.796 2 DEBUG nova.objects.instance [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.819 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.820 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Ensure instance console log exists: /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.821 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.821 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.822 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:20 np0005479822 nova_compute[235132]: 2025-10-10 10:15:20.832 2 DEBUG nova.policy [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 06:15:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:15:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5385 writes, 28K keys, 5385 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s#012Cumulative WAL: 5384 writes, 5384 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1518 writes, 7361 keys, 1518 commit groups, 1.0 writes per commit group, ingest: 16.90 MB, 0.03 MB/s#012Interval WAL: 1517 writes, 1517 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    145.5      0.30              0.15        15    0.020       0      0       0.0       0.0#012  L6      1/0   13.66 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    212.4    182.1      0.97              0.55        14    0.069     73K   7379       0.0       0.0#012 Sum      1/0   13.66 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    162.7    173.5      1.27              0.71        29    0.044     73K   7379       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9    187.1    190.6      0.40              0.23        10    0.040     30K   2558       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    212.4    182.1      0.97              0.55        14    0.069     73K   7379       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    146.7      0.29              0.15        14    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.042, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.12 MB/s write, 0.20 GB read, 0.11 MB/s read, 1.3 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 17.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000125 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(932,16.87 MB,5.54813%) FilterBlock(29,219.23 KB,0.0704263%) IndexBlock(29,378.61 KB,0.121624%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 06:15:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:22.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:22 np0005479822 nova_compute[235132]: 2025-10-10 10:15:22.991 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully created port: 562e8418-d47e-4fd1-8a23-094e0ce40097 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 06:15:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.727 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully updated port: 562e8418-d47e-4fd1-8a23-094e0ce40097 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.746 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.746 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.747 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.829 2 DEBUG nova.compute.manager [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.830 2 DEBUG nova.compute.manager [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.831 2 DEBUG oslo_concurrency.lockutils [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:15:23 np0005479822 nova_compute[235132]: 2025-10-10 10:15:23.908 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 06:15:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:24.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.047 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.076 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.077 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance network_info: |[{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.077 2 DEBUG oslo_concurrency.lockutils [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.077 2 DEBUG nova.network.neutron [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.081 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start _get_guest_xml network_info=[{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.085 2 WARNING nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.091 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.092 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.100 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.100 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.101 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.102 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.103 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.103 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.104 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.104 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.104 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.105 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.105 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.106 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.106 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.107 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.112 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:25 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:15:25 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3004183643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.574 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.608 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:25 np0005479822 nova_compute[235132]: 2025-10-10 10:15:25.613 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:15:26 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3003324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.054 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.056 2 DEBUG nova.virt.libvirt.vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:15:20Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.057 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.059 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.061 2 DEBUG nova.objects.instance [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.085 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] End _get_guest_xml xml=<domain type="kvm">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <uuid>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</uuid>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <name>instance-00000006</name>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <memory>131072</memory>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <vcpu>1</vcpu>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:creationTime>2025-10-10 10:15:25</nova:creationTime>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:flavor name="m1.nano">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:memory>128</nova:memory>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:disk>1</nova:disk>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:swap>0</nova:swap>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:vcpus>1</nova:vcpus>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </nova:flavor>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:owner>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </nova:owner>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <nova:ports>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        </nova:port>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </nova:ports>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </nova:instance>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <sysinfo type="smbios">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <entry name="manufacturer">RDO</entry>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <entry name="product">OpenStack Compute</entry>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <entry name="serial">bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <entry name="uuid">bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <entry name="family">Virtual Machine</entry>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <boot dev="hd"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <smbios mode="sysinfo"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <vmcoreinfo/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <clock offset="utc">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <timer name="hpet" present="no"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <cpu mode="host-model" match="exact">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <disk type="network" device="disk">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <target dev="vda" bus="virtio"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <disk type="network" device="cdrom">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <target dev="sda" bus="sata"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <interface type="ethernet">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <mac address="fa:16:3e:73:fc:1f"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <mtu size="1442"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <target dev="tap562e8418-d4"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <serial type="pty">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <log file="/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log" append="off"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <input type="tablet" bus="usb"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <rng model="virtio">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <backend model="random">/dev/urandom</backend>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <controller type="usb" index="0"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    <memballoon model="virtio">
Oct 10 06:15:26 np0005479822 nova_compute[235132]:      <stats period="10"/>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:15:26 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:15:26 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:15:26 np0005479822 nova_compute[235132]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.087 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Preparing to wait for external event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.088 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.088 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.088 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.090 2 DEBUG nova.virt.libvirt.vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:15:20Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.090 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.091 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.092 2 DEBUG os_vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap562e8418-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap562e8418-d4, col_values=(('external_ids', {'iface-id': '562e8418-d47e-4fd1-8a23-094e0ce40097', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:fc:1f', 'vm-uuid': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:26 np0005479822 NetworkManager[44982]: <info>  [1760091326.1023] manager: (tap562e8418-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.113 2 INFO os_vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4')#033[00m
Oct 10 06:15:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:26.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.175 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.175 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.176 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:73:fc:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.176 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Using config drive#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.207 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:26.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.882 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Creating config drive at /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config#033[00m
Oct 10 06:15:26 np0005479822 nova_compute[235132]: 2025-10-10 10:15:26.887 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkq3vdkm8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.013 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkq3vdkm8" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.044 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.049 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.077 2 DEBUG nova.network.neutron [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.079 2 DEBUG nova.network.neutron [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.098 2 DEBUG oslo_concurrency.lockutils [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.228 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.229 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deleting local config drive /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config because it was imported into RBD.#033[00m
Oct 10 06:15:27 np0005479822 kernel: tap562e8418-d4: entered promiscuous mode
Oct 10 06:15:27 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:27Z|00067|binding|INFO|Claiming lport 562e8418-d47e-4fd1-8a23-094e0ce40097 for this chassis.
Oct 10 06:15:27 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:27Z|00068|binding|INFO|562e8418-d47e-4fd1-8a23-094e0ce40097: Claiming fa:16:3e:73:fc:1f 10.100.0.12
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 NetworkManager[44982]: <info>  [1760091327.3119] manager: (tap562e8418-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.334 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:fc:1f 10.100.0.12'], port_security=['fa:16:3e:73:fc:1f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f14a6f9-41f9-49f8-b407-62ca2cdc0259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d717de-5083-46ba-b06e-f3ccc6cb202a, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=562e8418-d47e-4fd1-8a23-094e0ce40097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.335 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 562e8418-d47e-4fd1-8a23-094e0ce40097 in datapath ebfb122d-a6ca-4257-952a-e1a888448e1c bound to our chassis#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.335 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebfb122d-a6ca-4257-952a-e1a888448e1c#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.350 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0c7f0a-64cc-4888-b415-7e4a51d422ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.351 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebfb122d-a1 in ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.354 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebfb122d-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.354 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[66e6985a-f200-450e-b963-27baf6ffe40e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.355 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9119c73f-d432-40e4-afb9-96f7dae9b0f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 systemd-machined[191637]: New machine qemu-4-instance-00000006.
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.379 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[47a015be-72bc-47f8-b687-db3d30d6bf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Oct 10 06:15:27 np0005479822 systemd-udevd[243002]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:27Z|00069|binding|INFO|Setting lport 562e8418-d47e-4fd1-8a23-094e0ce40097 ovn-installed in OVS
Oct 10 06:15:27 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:27Z|00070|binding|INFO|Setting lport 562e8418-d47e-4fd1-8a23-094e0ce40097 up in Southbound
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.409 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1f5903-82f6-41e6-a0c9-f3bf56b95ffd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 NetworkManager[44982]: <info>  [1760091327.4223] device (tap562e8418-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:15:27 np0005479822 NetworkManager[44982]: <info>  [1760091327.4231] device (tap562e8418-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:15:27 np0005479822 podman[242971]: 2025-10-10 10:15:27.441690106 +0000 UTC m=+0.079841890 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.443 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ec2ef3-c900-42e5-825e-8583c5dce16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.448 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a99448-38d8-4341-8593-3ea076f381a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 NetworkManager[44982]: <info>  [1760091327.4493] manager: (tapebfb122d-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 10 06:15:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:27 np0005479822 systemd-udevd[243018]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:15:27 np0005479822 podman[242970]: 2025-10-10 10:15:27.461222421 +0000 UTC m=+0.110402658 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.479 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[4335649e-524f-40e8-8a4c-7747062c3d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.482 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfcf517-9f3c-4342-bdf0-ea3f11470b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 podman[242973]: 2025-10-10 10:15:27.498247336 +0000 UTC m=+0.134587630 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:15:27 np0005479822 NetworkManager[44982]: <info>  [1760091327.5028] device (tapebfb122d-a0): carrier: link connected
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.506 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3efe5f-7052-41f4-9d5a-92887fd0d804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.521 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[120fba4a-9b35-422e-b3c4-5cc5d48a5815]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebfb122d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424121, 'reachable_time': 37562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243068, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.534 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[55345305-584a-440b-8c9a-809cba5e6c6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:6451'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424121, 'tstamp': 424121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243069, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.549 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1299e3d2-6a98-48f6-bffe-b3a727aa14ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebfb122d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424121, 'reachable_time': 37562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243070, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.571 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[47f8afbd-8871-4ba5-914e-26892ff430c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.605 2 DEBUG nova.compute.manager [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG oslo_concurrency.lockutils [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG oslo_concurrency.lockutils [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG oslo_concurrency.lockutils [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG nova.compute.manager [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Processing event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.628 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6a91a37b-e509-4b9a-91ac-69b31d313215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.630 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebfb122d-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.630 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.631 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebfb122d-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 NetworkManager[44982]: <info>  [1760091327.6350] manager: (tapebfb122d-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 10 06:15:27 np0005479822 kernel: tapebfb122d-a0: entered promiscuous mode
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.646 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebfb122d-a0, col_values=(('external_ids', {'iface-id': '318e6d8e-f58f-407d-854f-d27adc402b34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:27Z|00071|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 06:15:27 np0005479822 nova_compute[235132]: 2025-10-10 10:15:27.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.677 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebfb122d-a6ca-4257-952a-e1a888448e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebfb122d-a6ca-4257-952a-e1a888448e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.678 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9f2ed1-b5b5-4eb4-b592-fd1b89f808b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.679 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-ebfb122d-a6ca-4257-952a-e1a888448e1c
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/ebfb122d-a6ca-4257-952a-e1a888448e1c.pid.haproxy
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID ebfb122d-a6ca-4257-952a-e1a888448e1c
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:15:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.680 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'env', 'PROCESS_TAG=haproxy-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebfb122d-a6ca-4257-952a-e1a888448e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:15:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:28 np0005479822 podman[243144]: 2025-10-10 10:15:28.154883857 +0000 UTC m=+0.083213022 container create 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 06:15:28 np0005479822 podman[243144]: 2025-10-10 10:15:28.1119374 +0000 UTC m=+0.040266575 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:15:28 np0005479822 systemd[1]: Started libpod-conmon-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69.scope.
Oct 10 06:15:28 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:15:28 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1835549180fe222ffd4c8fc7255dc61386526a292af096d5df92e7189879c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:15:28 np0005479822 podman[243144]: 2025-10-10 10:15:28.267964117 +0000 UTC m=+0.196293302 container init 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:15:28 np0005479822 podman[243144]: 2025-10-10 10:15:28.272683836 +0000 UTC m=+0.201012991 container start 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 06:15:28 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : New worker (243165) forked
Oct 10 06:15:28 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : Loading success.
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.523 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.524 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091328.5224319, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.525 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Started (Lifecycle Event)#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.529 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.534 2 INFO nova.virt.libvirt.driver [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance spawned successfully.#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.535 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.559 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.567 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.575 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.576 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.578 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.578 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.579 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.580 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:15:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.589 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.589 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091328.5238392, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.589 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Paused (Lifecycle Event)
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.615 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.618 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091328.5283499, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.619 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Resumed (Lifecycle Event)
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.642 2 INFO nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 8.52 seconds to spawn the instance on the hypervisor.
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.642 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.644 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.652 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.685 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.709 2 INFO nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 9.45 seconds to build instance.
Oct 10 06:15:28 np0005479822 nova_compute[235132]: 2025-10-10 10:15:28.723 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:15:29 np0005479822 nova_compute[235132]: 2025-10-10 10:15:29.713 2 DEBUG nova.compute.manager [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:15:29 np0005479822 nova_compute[235132]: 2025-10-10 10:15:29.713 2 DEBUG oslo_concurrency.lockutils [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:15:29 np0005479822 nova_compute[235132]: 2025-10-10 10:15:29.714 2 DEBUG oslo_concurrency.lockutils [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:15:29 np0005479822 nova_compute[235132]: 2025-10-10 10:15:29.714 2 DEBUG oslo_concurrency.lockutils [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:15:29 np0005479822 nova_compute[235132]: 2025-10-10 10:15:29.714 2 DEBUG nova.compute.manager [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 06:15:29 np0005479822 nova_compute[235132]: 2025-10-10 10:15:29.714 2 WARNING nova.compute.manager [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 for instance with vm_state active and task_state None.
Oct 10 06:15:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:30.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:31 np0005479822 nova_compute[235132]: 2025-10-10 10:15:31.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:32 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:32Z|00072|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 06:15:32 np0005479822 NetworkManager[44982]: <info>  [1760091332.1507] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:32 np0005479822 NetworkManager[44982]: <info>  [1760091332.1517] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 10 06:15:32 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:32Z|00073|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:32.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.640 2 DEBUG nova.compute.manager [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.641 2 DEBUG nova.compute.manager [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.641 2 DEBUG oslo_concurrency.lockutils [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.642 2 DEBUG oslo_concurrency.lockutils [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 06:15:32 np0005479822 nova_compute[235132]: 2025-10-10 10:15:32.642 2 DEBUG nova.network.neutron [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 06:15:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:33 np0005479822 nova_compute[235132]: 2025-10-10 10:15:33.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:34 np0005479822 nova_compute[235132]: 2025-10-10 10:15:34.029 2 DEBUG nova.network.neutron [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 06:15:34 np0005479822 nova_compute[235132]: 2025-10-10 10:15:34.030 2 DEBUG nova.network.neutron [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 06:15:34 np0005479822 nova_compute[235132]: 2025-10-10 10:15:34.049 2 DEBUG oslo_concurrency.lockutils [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 06:15:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:34.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:36 np0005479822 nova_compute[235132]: 2025-10-10 10:15:36.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:38 np0005479822 nova_compute[235132]: 2025-10-10 10:15:38.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:40.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:40 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:40Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:fc:1f 10.100.0.12
Oct 10 06:15:40 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:40Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:fc:1f 10.100.0.12
Oct 10 06:15:41 np0005479822 nova_compute[235132]: 2025-10-10 10:15:41.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:15:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:42.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:42.211 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:15:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:42.211 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:15:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:42.212 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:15:42 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:42 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.515268739 +0000 UTC m=+0.076397246 container create 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.467524569 +0000 UTC m=+0.028653066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:15:42 np0005479822 systemd[1]: Started libpod-conmon-9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5.scope.
Oct 10 06:15:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:42.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:42 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.630164759 +0000 UTC m=+0.191293296 container init 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.643616657 +0000 UTC m=+0.204745144 container start 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.647518454 +0000 UTC m=+0.208646981 container attach 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 06:15:42 np0005479822 awesome_fermat[243398]: 167 167
Oct 10 06:15:42 np0005479822 systemd[1]: libpod-9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5.scope: Deactivated successfully.
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.656741567 +0000 UTC m=+0.217870054 container died 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:15:42 np0005479822 systemd[1]: var-lib-containers-storage-overlay-a414090e3ef29d589aa25f38409169e23c7f2325f26c0ccb3a002c371e89aebf-merged.mount: Deactivated successfully.
Oct 10 06:15:42 np0005479822 podman[243381]: 2025-10-10 10:15:42.70135376 +0000 UTC m=+0.262482237 container remove 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 06:15:42 np0005479822 systemd[1]: libpod-conmon-9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5.scope: Deactivated successfully.
Oct 10 06:15:42 np0005479822 podman[243422]: 2025-10-10 10:15:42.954247833 +0000 UTC m=+0.082764860 container create 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 06:15:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:43 np0005479822 systemd[1]: Started libpod-conmon-148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef.scope.
Oct 10 06:15:43 np0005479822 podman[243422]: 2025-10-10 10:15:42.919289975 +0000 UTC m=+0.047807052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 06:15:43 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:15:43 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 06:15:43 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 06:15:43 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 06:15:43 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 06:15:43 np0005479822 podman[243422]: 2025-10-10 10:15:43.066023247 +0000 UTC m=+0.194540324 container init 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 06:15:43 np0005479822 podman[243422]: 2025-10-10 10:15:43.080242757 +0000 UTC m=+0.208759784 container start 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:15:43 np0005479822 podman[243422]: 2025-10-10 10:15:43.084768781 +0000 UTC m=+0.213285868 container attach 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 06:15:43 np0005479822 nova_compute[235132]: 2025-10-10 10:15:43.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]: [
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:    {
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "available": false,
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "being_replaced": false,
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "ceph_device_lvm": false,
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "lsm_data": {},
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "lvs": [],
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "path": "/dev/sr0",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "rejected_reasons": [
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "Insufficient space (<5GB)",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "Has a FileSystem"
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        ],
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        "sys_api": {
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "actuators": null,
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "device_nodes": [
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:                "sr0"
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            ],
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "devname": "sr0",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "human_readable_size": "482.00 KB",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "id_bus": "ata",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "model": "QEMU DVD-ROM",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "nr_requests": "2",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "parent": "/dev/sr0",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "partitions": {},
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "path": "/dev/sr0",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "removable": "1",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "rev": "2.5+",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "ro": "0",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "rotational": "0",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "sas_address": "",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "sas_device_handle": "",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "scheduler_mode": "mq-deadline",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "sectors": 0,
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "sectorsize": "2048",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "size": 493568.0,
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "support_discard": "2048",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "type": "disk",
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:            "vendor": "QEMU"
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:        }
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]:    }
Oct 10 06:15:44 np0005479822 condescending_hypatia[243438]: ]
Oct 10 06:15:44 np0005479822 systemd[1]: libpod-148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef.scope: Deactivated successfully.
Oct 10 06:15:44 np0005479822 podman[243422]: 2025-10-10 10:15:44.052480589 +0000 UTC m=+1.180997636 container died 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 06:15:44 np0005479822 systemd[1]: var-lib-containers-storage-overlay-0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46-merged.mount: Deactivated successfully.
Oct 10 06:15:44 np0005479822 podman[243422]: 2025-10-10 10:15:44.105154663 +0000 UTC m=+1.233671660 container remove 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 10 06:15:44 np0005479822 systemd[1]: libpod-conmon-148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef.scope: Deactivated successfully.
Oct 10 06:15:44 np0005479822 podman[244757]: 2025-10-10 10:15:44.155411991 +0000 UTC m=+0.073826654 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 06:15:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:44.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:44 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:15:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:44.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:46 np0005479822 nova_compute[235132]: 2025-10-10 10:15:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:46.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:46.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:47 np0005479822 nova_compute[235132]: 2025-10-10 10:15:47.269 2 INFO nova.compute.manager [None req-57b961d1-792e-4b4e-afa0-cfec45a9528e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Get console output#033[00m
Oct 10 06:15:47 np0005479822 nova_compute[235132]: 2025-10-10 10:15:47.277 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct 10 06:15:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:48 np0005479822 nova_compute[235132]: 2025-10-10 10:15:48.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:48.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:48.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:49 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:15:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:50.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:50 np0005479822 nova_compute[235132]: 2025-10-10 10:15:50.575 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:50 np0005479822 nova_compute[235132]: 2025-10-10 10:15:50.576 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:50 np0005479822 nova_compute[235132]: 2025-10-10 10:15:50.577 2 DEBUG nova.objects.instance [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:15:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:50.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:50 np0005479822 nova_compute[235132]: 2025-10-10 10:15:50.994 2 DEBUG nova.objects.instance [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_requests' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:15:51 np0005479822 nova_compute[235132]: 2025-10-10 10:15:51.010 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 06:15:51 np0005479822 nova_compute[235132]: 2025-10-10 10:15:51.185 2 DEBUG nova.policy [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 06:15:51 np0005479822 nova_compute[235132]: 2025-10-10 10:15:51.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:51 np0005479822 nova_compute[235132]: 2025-10-10 10:15:51.820 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully created port: a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:15:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:52.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.202 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.227 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Triggering sync for uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.227 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.228 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.288 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:52 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:52.483 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:15:52 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:52.484 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:15:52 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:52.484 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:52.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.695 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully updated port: a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.711 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.712 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.712 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.808 2 DEBUG nova.compute.manager [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.809 2 DEBUG nova.compute.manager [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:15:52 np0005479822 nova_compute[235132]: 2025-10-10 10:15:52.809 2 DEBUG oslo_concurrency.lockutils [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:15:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:53 np0005479822 nova_compute[235132]: 2025-10-10 10:15:53.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:53 np0005479822 nova_compute[235132]: 2025-10-10 10:15:53.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:53 np0005479822 nova_compute[235132]: 2025-10-10 10:15:53.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:53 np0005479822 nova_compute[235132]: 2025-10-10 10:15:53.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:53 np0005479822 nova_compute[235132]: 2025-10-10 10:15:53.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:54.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:15:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.060 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.061 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.061 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.079 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.546 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.573 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.574 2 DEBUG oslo_concurrency.lockutils [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.574 2 DEBUG nova.network.neutron [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.578 2 DEBUG nova.virt.libvirt.vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.578 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.579 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.579 2 DEBUG os_vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6efe4ab-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6efe4ab-2a, col_values=(('external_ids', {'iface-id': 'a6efe4ab-2a26-46aa-8bf2-3dda99ea238c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:b1:45', 'vm-uuid': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.5898] manager: (tapa6efe4ab-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.600 2 INFO os_vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a')#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.601 2 DEBUG nova.virt.libvirt.vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.601 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.602 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.604 2 DEBUG nova.virt.libvirt.guest [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] attach device xml: <interface type="ethernet">
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <mac address="fa:16:3e:ae:b1:45"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <model type="virtio"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <mtu size="1442"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <target dev="tapa6efe4ab-2a"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]: </interface>
Oct 10 06:15:55 np0005479822 nova_compute[235132]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct 10 06:15:55 np0005479822 kernel: tapa6efe4ab-2a: entered promiscuous mode
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.6254] manager: (tapa6efe4ab-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Oct 10 06:15:55 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:55Z|00074|binding|INFO|Claiming lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for this chassis.
Oct 10 06:15:55 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:55Z|00075|binding|INFO|a6efe4ab-2a26-46aa-8bf2-3dda99ea238c: Claiming fa:16:3e:ae:b1:45 10.100.0.29
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.641 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b1:45 10.100.0.29'], port_security=['fa:16:3e:ae:b1:45 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daddf600-eff8-433f-97e5-f9a5bf5367ce, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.642 141156 INFO neutron.agent.ovn.metadata.agent [-] Port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c in datapath 87f6394d-4290-4eca-8ba0-18711f3ad6e0 bound to our chassis#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.643 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87f6394d-4290-4eca-8ba0-18711f3ad6e0#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.654 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5c552a08-c371-4cf6-996c-0da1878d09e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.656 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87f6394d-41 in ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.658 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87f6394d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.658 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[587f998f-b319-4892-8622-75f80a4acc60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.659 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[751644b2-9258-4d14-89a6-d93b07dcd257]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 systemd-udevd[244825]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:55Z|00076|binding|INFO|Setting lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c ovn-installed in OVS
Oct 10 06:15:55 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:55Z|00077|binding|INFO|Setting lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c up in Southbound
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.676 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[42710e81-3978-4576-b7d5-91c505852f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.6845] device (tapa6efe4ab-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.6857] device (tapa6efe4ab-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.701 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7a52e845-7b40-49da-a9c0-b18402cd55e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.732 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.733 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.732 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bbb61d-36af-4075-9ba8-6680ae253e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.733 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:73:fc:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.734 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:ae:b1:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.7381] manager: (tap87f6394d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Oct 10 06:15:55 np0005479822 systemd-udevd[244828]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.737 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cad625-673e-48ac-a1e7-d4ab66446b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.756 2 DEBUG nova.virt.libvirt.guest [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:15:55</nova:creationTime>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 06:15:55 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    <nova:port uuid="a6efe4ab-2a26-46aa-8bf2-3dda99ea238c">
Oct 10 06:15:55 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:15:55 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:15:55 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:15:55 np0005479822 nova_compute[235132]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.769 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[5090c819-26a2-43d5-809f-cf83364f21f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.772 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[7d939fde-d06c-4a26-ac1a-9b7d9aff574b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.793 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.7947] device (tap87f6394d-40): carrier: link connected
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.798 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea55b6e-0c89-4741-a33f-bf78a22a42cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.818 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[34bfea6d-600b-400b-b6eb-7976256195d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87f6394d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:68:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426950, 'reachable_time': 43037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244851, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.832 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[01df94ba-0a71-41d5-8f18-7eb886485059]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:68a4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426950, 'tstamp': 426950}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244852, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.849 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fc97d-c7cc-44b5-a8e0-64a9b78cb3e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87f6394d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:68:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426950, 'reachable_time': 43037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244853, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.883 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5bd2b7-60a4-4719-a3bf-1031c2162937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.978 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd7c497-ba00-41f8-8164-9a333b2de116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.980 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87f6394d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.980 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.981 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87f6394d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:55 np0005479822 kernel: tap87f6394d-40: entered promiscuous mode
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 NetworkManager[44982]: <info>  [1760091355.9845] manager: (tap87f6394d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.992 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87f6394d-40, col_values=(('external_ids', {'iface-id': '25f0e25b-e08d-4c72-b1cf-e3d546e34451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.992 2 DEBUG nova.compute.manager [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.992 2 DEBUG oslo_concurrency.lockutils [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.993 2 DEBUG oslo_concurrency.lockutils [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.993 2 DEBUG oslo_concurrency.lockutils [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.994 2 DEBUG nova.compute.manager [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:15:55 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:55Z|00078|binding|INFO|Releasing lport 25f0e25b-e08d-4c72-b1cf-e3d546e34451 from this chassis (sb_readonly=0)
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.994 2 WARNING nova.compute.manager [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state None.#033[00m
Oct 10 06:15:55 np0005479822 nova_compute[235132]: 2025-10-10 10:15:55.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:55 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.998 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:56.000 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8e93a820-c11c-4f84-b70e-2113bc9b1bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:56.001 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID 87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:15:56 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:15:56.004 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'env', 'PROCESS_TAG=haproxy-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87f6394d-4290-4eca-8ba0-18711f3ad6e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:15:56 np0005479822 nova_compute[235132]: 2025-10-10 10:15:56.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:56 np0005479822 nova_compute[235132]: 2025-10-10 10:15:56.057 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:56 np0005479822 nova_compute[235132]: 2025-10-10 10:15:56.091 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:15:56 np0005479822 nova_compute[235132]: 2025-10-10 10:15:56.091 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:15:56 np0005479822 nova_compute[235132]: 2025-10-10 10:15:56.091 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:15:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:56.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:56 np0005479822 podman[244885]: 2025-10-10 10:15:56.42917514 +0000 UTC m=+0.069186227 container create 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 10 06:15:56 np0005479822 systemd[1]: Started libpod-conmon-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050.scope.
Oct 10 06:15:56 np0005479822 podman[244885]: 2025-10-10 10:15:56.393500023 +0000 UTC m=+0.033511160 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:15:56 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:15:56 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e01352dbf0d1bebdf46980d76e9074c40ab7243819392e4c65167a834fa151/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:15:56 np0005479822 podman[244885]: 2025-10-10 10:15:56.543556266 +0000 UTC m=+0.183567323 container init 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:15:56 np0005479822 podman[244885]: 2025-10-10 10:15:56.553253362 +0000 UTC m=+0.193264419 container start 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 06:15:56 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : New worker (244906) forked
Oct 10 06:15:56 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : Loading success.
Oct 10 06:15:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.148 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:15:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.676 2 DEBUG nova.network.neutron [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.677 2 DEBUG nova.network.neutron [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.701 2 DEBUG oslo_concurrency.lockutils [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.702 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.702 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 06:15:57 np0005479822 nova_compute[235132]: 2025-10-10 10:15:57.703 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:15:57 np0005479822 podman[244917]: 2025-10-10 10:15:57.98897811 +0000 UTC m=+0.092161338 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 06:15:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:15:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:15:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:15:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:15:58 np0005479822 podman[244916]: 2025-10-10 10:15:58.005103362 +0000 UTC m=+0.103719204 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 10 06:15:58 np0005479822 podman[244918]: 2025-10-10 10:15:58.039921406 +0000 UTC m=+0.128513874 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 06:15:58 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:58Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:b1:45 10.100.0.29
Oct 10 06:15:58 np0005479822 ovn_controller[131749]: 2025-10-10T10:15:58Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:b1:45 10.100.0.29
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.128 2 DEBUG nova.compute.manager [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.129 2 DEBUG oslo_concurrency.lockutils [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.129 2 DEBUG oslo_concurrency.lockutils [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.130 2 DEBUG oslo_concurrency.lockutils [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.130 2 DEBUG nova.compute.manager [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:15:58 np0005479822 nova_compute[235132]: 2025-10-10 10:15:58.131 2 WARNING nova.compute.manager [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state None.#033[00m
Oct 10 06:15:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:15:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:15:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:15:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:15:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:58.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:00 np0005479822 nova_compute[235132]: 2025-10-10 10:16:00.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:00.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.097 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.138 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.139 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.140 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.140 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.141 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.166 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.166 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.167 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:16:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:16:01 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/924314656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.664 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.752 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.753 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.986 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.988 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4710MB free_disk=59.942726135253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.988 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:16:01 np0005479822 nova_compute[235132]: 2025-10-10 10:16:01.989 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.175 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.176 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.176 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:16:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.270 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.357 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.358 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.387 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.428 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 06:16:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.470 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:16:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:02.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:16:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/369795556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.962 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.970 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:16:02 np0005479822 nova_compute[235132]: 2025-10-10 10:16:02.994 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:16:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:03 np0005479822 nova_compute[235132]: 2025-10-10 10:16:03.026 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:16:03 np0005479822 nova_compute[235132]: 2025-10-10 10:16:03.027 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:16:03 np0005479822 nova_compute[235132]: 2025-10-10 10:16:03.028 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:03 np0005479822 nova_compute[235132]: 2025-10-10 10:16:03.029 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 10 06:16:03 np0005479822 nova_compute[235132]: 2025-10-10 10:16:03.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:04.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:04.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:05 np0005479822 nova_compute[235132]: 2025-10-10 10:16:05.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:06.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:06.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:08 np0005479822 nova_compute[235132]: 2025-10-10 10:16:08.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:08.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:08.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:10 np0005479822 nova_compute[235132]: 2025-10-10 10:16:10.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:12.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:13 np0005479822 nova_compute[235132]: 2025-10-10 10:16:13.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:14 np0005479822 podman[245056]: 2025-10-10 10:16:14.956347997 +0000 UTC m=+0.059482611 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 06:16:15 np0005479822 nova_compute[235132]: 2025-10-10 10:16:15.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:16.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:18 np0005479822 nova_compute[235132]: 2025-10-10 10:16:18.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:18.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:20.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:20 np0005479822 nova_compute[235132]: 2025-10-10 10:16:20.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:22.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:23 np0005479822 nova_compute[235132]: 2025-10-10 10:16:23.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:25 np0005479822 nova_compute[235132]: 2025-10-10 10:16:25.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:26.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:28 np0005479822 nova_compute[235132]: 2025-10-10 10:16:28.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:28 np0005479822 podman[245107]: 2025-10-10 10:16:28.981022606 +0000 UTC m=+0.079922222 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 10 06:16:28 np0005479822 podman[245108]: 2025-10-10 10:16:28.991416791 +0000 UTC m=+0.083357906 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 10 06:16:29 np0005479822 podman[245109]: 2025-10-10 10:16:29.002642759 +0000 UTC m=+0.094330888 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:16:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:30 np0005479822 nova_compute[235132]: 2025-10-10 10:16:30.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:33 np0005479822 nova_compute[235132]: 2025-10-10 10:16:33.048 2 DEBUG nova.compute.manager [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:16:33 np0005479822 nova_compute[235132]: 2025-10-10 10:16:33.048 2 DEBUG nova.compute.manager [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:16:33 np0005479822 nova_compute[235132]: 2025-10-10 10:16:33.049 2 DEBUG oslo_concurrency.lockutils [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:16:33 np0005479822 nova_compute[235132]: 2025-10-10 10:16:33.049 2 DEBUG oslo_concurrency.lockutils [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:16:33 np0005479822 nova_compute[235132]: 2025-10-10 10:16:33.050 2 DEBUG nova.network.neutron [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:16:33 np0005479822 nova_compute[235132]: 2025-10-10 10:16:33.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:34.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:34 np0005479822 nova_compute[235132]: 2025-10-10 10:16:34.888 2 DEBUG nova.network.neutron [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:16:34 np0005479822 nova_compute[235132]: 2025-10-10 10:16:34.888 2 DEBUG nova.network.neutron [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:16:34 np0005479822 nova_compute[235132]: 2025-10-10 10:16:34.914 2 DEBUG oslo_concurrency.lockutils [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:16:35 np0005479822 nova_compute[235132]: 2025-10-10 10:16:35.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:36.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:38 np0005479822 nova_compute[235132]: 2025-10-10 10:16:38.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:38.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:40 np0005479822 nova_compute[235132]: 2025-10-10 10:16:40.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:16:42.213 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:16:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:16:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:16:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:16:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:16:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:43 np0005479822 nova_compute[235132]: 2025-10-10 10:16:43.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:45 np0005479822 nova_compute[235132]: 2025-10-10 10:16:45.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:45 np0005479822 podman[245208]: 2025-10-10 10:16:45.988932215 +0000 UTC m=+0.085968057 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 10 06:16:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:48 np0005479822 nova_compute[235132]: 2025-10-10 10:16:48.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:48.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:50 np0005479822 nova_compute[235132]: 2025-10-10 10:16:50.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:50.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:16:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:53 np0005479822 nova_compute[235132]: 2025-10-10 10:16:53.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:53 np0005479822 nova_compute[235132]: 2025-10-10 10:16:53.971 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:53 np0005479822 nova_compute[235132]: 2025-10-10 10:16:53.972 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:53 np0005479822 nova_compute[235132]: 2025-10-10 10:16:53.972 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:16:54 np0005479822 nova_compute[235132]: 2025-10-10 10:16:54.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:54.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:54.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:55 np0005479822 nova_compute[235132]: 2025-10-10 10:16:55.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:55 np0005479822 nova_compute[235132]: 2025-10-10 10:16:55.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:16:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:56.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:56.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.880 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.881 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.881 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 10 06:16:56 np0005479822 nova_compute[235132]: 2025-10-10 10:16:56.882 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:16:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:16:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:16:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:16:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:16:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:16:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:16:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:16:58 np0005479822 nova_compute[235132]: 2025-10-10 10:16:58.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:16:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:16:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:16:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:16:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:16:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:58.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:16:59 np0005479822 podman[245366]: 2025-10-10 10:16:59.382240346 +0000 UTC m=+0.060149380 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 06:16:59 np0005479822 podman[245365]: 2025-10-10 10:16:59.43418575 +0000 UTC m=+0.105152274 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:16:59 np0005479822 podman[245367]: 2025-10-10 10:16:59.440917524 +0000 UTC m=+0.106423728 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 06:17:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:00.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.921 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.942 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.943 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.944 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.944 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.944 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.969 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:17:00 np0005479822 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:17:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:17:01 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/867703427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.439 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.516 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.517 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.714 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.715 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4736MB free_disk=59.89699172973633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.715 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.715 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.783 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.784 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.784 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:17:01 np0005479822 nova_compute[235132]: 2025-10-10 10:17:01.826 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:17:02 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:02Z|00079|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 10 06:17:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:17:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1641573569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:17:02 np0005479822 nova_compute[235132]: 2025-10-10 10:17:02.293 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:17:02 np0005479822 nova_compute[235132]: 2025-10-10 10:17:02.299 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:17:02 np0005479822 nova_compute[235132]: 2025-10-10 10:17:02.317 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:17:02 np0005479822 nova_compute[235132]: 2025-10-10 10:17:02.320 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:17:02 np0005479822 nova_compute[235132]: 2025-10-10 10:17:02.321 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:02 np0005479822 nova_compute[235132]: 2025-10-10 10:17:02.420 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:03 np0005479822 nova_compute[235132]: 2025-10-10 10:17:03.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:04.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.410275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424410379, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1676, "num_deletes": 257, "total_data_size": 4231648, "memory_usage": 4297216, "flush_reason": "Manual Compaction"}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424427122, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2743362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28342, "largest_seqno": 30013, "table_properties": {"data_size": 2736402, "index_size": 3967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14641, "raw_average_key_size": 19, "raw_value_size": 2722384, "raw_average_value_size": 3634, "num_data_blocks": 174, "num_entries": 749, "num_filter_entries": 749, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091288, "oldest_key_time": 1760091288, "file_creation_time": 1760091424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 16903 microseconds, and 9039 cpu microseconds.
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.427183) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2743362 bytes OK
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.427211) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.428888) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.428910) EVENT_LOG_v1 {"time_micros": 1760091424428903, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.428931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4223883, prev total WAL file size 4223883, number of live WAL files 2.
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.430801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2679KB)], [54(13MB)]
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424430850, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17069532, "oldest_snapshot_seqno": -1}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6031 keys, 16925428 bytes, temperature: kUnknown
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424518432, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 16925428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16882185, "index_size": 27069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153432, "raw_average_key_size": 25, "raw_value_size": 16770537, "raw_average_value_size": 2780, "num_data_blocks": 1111, "num_entries": 6031, "num_filter_entries": 6031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.519134) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16925428 bytes
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.520729) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.9 rd, 192.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(12.4) write-amplify(6.2) OK, records in: 6563, records dropped: 532 output_compression: NoCompression
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.520759) EVENT_LOG_v1 {"time_micros": 1760091424520746, "job": 32, "event": "compaction_finished", "compaction_time_micros": 88051, "compaction_time_cpu_micros": 59141, "output_level": 6, "num_output_files": 1, "total_output_size": 16925428, "num_input_records": 6563, "num_output_records": 6031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424522215, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424527680, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.430701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:17:04 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:17:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:05 np0005479822 nova_compute[235132]: 2025-10-10 10:17:05.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:06.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:08 np0005479822 nova_compute[235132]: 2025-10-10 10:17:08.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:08.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:10 np0005479822 nova_compute[235132]: 2025-10-10 10:17:10.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:13 np0005479822 nova_compute[235132]: 2025-10-10 10:17:13.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:13 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:13.206 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:17:13 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:13.206 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:17:13 np0005479822 nova_compute[235132]: 2025-10-10 10:17:13.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:15 np0005479822 nova_compute[235132]: 2025-10-10 10:17:15.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.225 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.226 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.245 2 DEBUG nova.objects.instance [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.269 2 DEBUG nova.virt.libvirt.vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.269 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.270 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.274 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.278 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.282 2 DEBUG nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Attempting to detach device tapa6efe4ab-2a from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.283 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <mac address="fa:16:3e:ae:b1:45"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <model type="virtio"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <mtu size="1442"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <target dev="tapa6efe4ab-2a"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </interface>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 06:17:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.292 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.297 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface>not found in domain: <domain type='kvm' id='4'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <name>instance-00000006</name>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <uuid>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</uuid>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:15:55</nova:creationTime>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:port uuid="a6efe4ab-2a26-46aa-8bf2-3dda99ea238c">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <memory unit='KiB'>131072</memory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <vcpu placement='static'>1</vcpu>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <resource>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <partition>/machine</partition>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </resource>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <sysinfo type='smbios'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='manufacturer'>RDO</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='product'>OpenStack Compute</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='serial'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='uuid'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='family'>Virtual Machine</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <boot dev='hd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <smbios mode='sysinfo'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <vmcoreinfo state='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <cpu mode='custom' match='exact' check='full'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <vendor>AMD</vendor>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='x2apic'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc-deadline'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='hypervisor'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc_adjust'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='spec-ctrl'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='stibp'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='arch-capabilities'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='ssbd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='cmp_legacy'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='overflow-recov'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='succor'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='ibrs'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='amd-ssbd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='virt-ssbd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='lbrv'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='tsc-scale'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='vmcb-clean'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='flushbyasid'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pause-filter'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pfthreshold'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='rdctl-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='mds-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='gds-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='rfds-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='xsaves'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svm'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='topoext'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='npt'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='nrip-save'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <clock offset='utc'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <timer name='pit' tickpolicy='delay'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <timer name='hpet' present='no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <on_poweroff>destroy</on_poweroff>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <on_reboot>restart</on_reboot>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <on_crash>destroy</on_crash>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <disk type='network' device='disk'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk' index='2'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='vda' bus='virtio'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='virtio-disk0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <disk type='network' device='cdrom'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config' index='1'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='sda' bus='sata'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <readonly/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='sata0-0-0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='0' model='pcie-root'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pcie.0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='1' port='0x10'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='2' port='0x11'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='3' port='0x12'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='4' port='0x13'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='5' port='0x14'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='6' port='0x15'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='7' port='0x16'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='8' port='0x17'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.8'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='9' port='0x18'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.9'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='10' port='0x19'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.10'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='11' port='0x1a'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.11'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='12' port='0x1b'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.12'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='13' port='0x1c'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.13'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='14' port='0x1d'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.14'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='15' port='0x1e'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.15'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='16' port='0x1f'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.16'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='17' port='0x20'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.17'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='18' port='0x21'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.18'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='19' port='0x22'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.19'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='20' port='0x23'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.20'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='21' port='0x24'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.21'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='22' port='0x25'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.22'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='23' port='0x26'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.23'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='24' port='0x27'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.24'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='25' port='0x28'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.25'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-pci-bridge'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.26'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='usb'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='sata' index='0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='ide'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:73:fc:1f'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='tap562e8418-d4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='net0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:ae:b1:45'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='tapa6efe4ab-2a'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='net1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <serial type='pty'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target type='isa-serial' port='0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <model name='isa-serial'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </target>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <console type='pty' tty='/dev/pts/0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target type='serial' port='0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </console>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <input type='tablet' bus='usb'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='input0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='usb' bus='0' port='1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <input type='mouse' bus='ps2'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='input1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <input type='keyboard' bus='ps2'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='input2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <listen type='address' address='::0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <audio id='1' type='none'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model type='virtio' heads='1' primary='yes'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='video0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <watchdog model='itco' action='reset'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='watchdog0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </watchdog>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <memballoon model='virtio'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <stats period='10'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='balloon0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <rng model='virtio'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <backend model='random'>/dev/urandom</backend>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='rng0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <label>system_u:system_r:svirt_t:s0:c141,c952</label>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c141,c952</imagelabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <label>+107:+107</label>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <imagelabel>+107:+107</imagelabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.299 2 INFO nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tapa6efe4ab-2a from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the persistent domain config.#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.300 2 DEBUG nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] (1/8): Attempting to detach device tapa6efe4ab-2a with device alias net1 from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.301 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <mac address="fa:16:3e:ae:b1:45"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <model type="virtio"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <mtu size="1442"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <target dev="tapa6efe4ab-2a"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </interface>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct 10 06:17:16 np0005479822 kernel: tapa6efe4ab-2a (unregistering): left promiscuous mode
Oct 10 06:17:16 np0005479822 NetworkManager[44982]: <info>  [1760091436.4164] device (tapa6efe4ab-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:16Z|00080|binding|INFO|Releasing lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c from this chassis (sb_readonly=0)
Oct 10 06:17:16 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:16Z|00081|binding|INFO|Setting lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c down in Southbound
Oct 10 06:17:16 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:16Z|00082|binding|INFO|Removing iface tapa6efe4ab-2a ovn-installed in OVS
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.442 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b1:45 10.100.0.29', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daddf600-eff8-433f-97e5-f9a5bf5367ce, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.443 141156 INFO neutron.agent.ovn.metadata.agent [-] Port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c in datapath 87f6394d-4290-4eca-8ba0-18711f3ad6e0 unbound from our chassis
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.445 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87f6394d-4290-4eca-8ba0-18711f3ad6e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.446 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0d472758-0710-4bbf-a1ea-5948124b9529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.446 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 namespace which is not needed anymore
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.472 2 DEBUG nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Received event <DeviceRemovedEvent: 1760091436.4693735, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.472 2 DEBUG nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Start waiting for the detach event from libvirt for device tapa6efe4ab-2a with device alias net1 for instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.473 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.482 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface>not found in domain: <domain type='kvm' id='4'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <name>instance-00000006</name>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <uuid>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</uuid>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:15:55</nova:creationTime>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:port uuid="a6efe4ab-2a26-46aa-8bf2-3dda99ea238c">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <memory unit='KiB'>131072</memory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <vcpu placement='static'>1</vcpu>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <resource>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <partition>/machine</partition>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </resource>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <sysinfo type='smbios'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='manufacturer'>RDO</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='product'>OpenStack Compute</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='serial'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='uuid'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <entry name='family'>Virtual Machine</entry>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <boot dev='hd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <smbios mode='sysinfo'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <vmcoreinfo state='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <cpu mode='custom' match='exact' check='full'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <model fallback='forbid'>EPYC-Rome</model>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <vendor>AMD</vendor>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='x2apic'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc-deadline'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='hypervisor'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='tsc_adjust'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='spec-ctrl'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='stibp'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='arch-capabilities'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='ssbd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='cmp_legacy'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='overflow-recov'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='succor'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='ibrs'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='amd-ssbd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='virt-ssbd'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='lbrv'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='tsc-scale'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='vmcb-clean'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='flushbyasid'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pause-filter'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='pfthreshold'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svme-addr-chk'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='lfence-always-serializing'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='rdctl-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='mds-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='pschange-mc-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='gds-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='rfds-no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='xsaves'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='svm'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='require' name='topoext'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='npt'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <feature policy='disable' name='nrip-save'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <clock offset='utc'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <timer name='pit' tickpolicy='delay'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <timer name='rtc' tickpolicy='catchup'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <timer name='hpet' present='no'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <on_poweroff>destroy</on_poweroff>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <on_reboot>restart</on_reboot>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <on_crash>destroy</on_crash>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <disk type='network' device='disk'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk' index='2'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='vda' bus='virtio'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='virtio-disk0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <disk type='network' device='cdrom'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='qemu' type='raw' cache='none'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <auth username='openstack'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config' index='1'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.100' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.102' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <host name='192.168.122.101' port='6789'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='sda' bus='sata'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <readonly/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='sata0-0-0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='0' model='pcie-root'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pcie.0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='1' port='0x10'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='2' port='0x11'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='3' port='0x12'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='4' port='0x13'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='5' port='0x14'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='6' port='0x15'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='7' port='0x16'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='8' port='0x17'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.8'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='9' port='0x18'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.9'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='10' port='0x19'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.10'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='11' port='0x1a'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.11'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='12' port='0x1b'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.12'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='13' port='0x1c'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.13'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='14' port='0x1d'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.14'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='15' port='0x1e'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.15'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='16' port='0x1f'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.16'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='17' port='0x20'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.17'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='18' port='0x21'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.18'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='19' port='0x22'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.19'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='20' port='0x23'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.20'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='21' port='0x24'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.21'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='22' port='0x25'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.22'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='23' port='0x26'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.23'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='24' port='0x27'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.24'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-root-port'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target chassis='25' port='0x28'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.25'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model name='pcie-pci-bridge'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='pci.26'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='usb'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <controller type='sata' index='0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='ide'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </controller>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <interface type='ethernet'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <mac address='fa:16:3e:73:fc:1f'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target dev='tap562e8418-d4'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model type='virtio'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <driver name='vhost' rx_queue_size='512'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <mtu size='1442'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='net0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <serial type='pty'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target type='isa-serial' port='0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:        <model name='isa-serial'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      </target>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <console type='pty' tty='/dev/pts/0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <source path='/dev/pts/0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <target type='serial' port='0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='serial0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </console>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <input type='tablet' bus='usb'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='input0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='usb' bus='0' port='1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <input type='mouse' bus='ps2'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='input1'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <input type='keyboard' bus='ps2'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='input2'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </input>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <listen type='address' address='::0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </graphics>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <audio id='1' type='none'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <model type='virtio' heads='1' primary='yes'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='video0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <watchdog model='itco' action='reset'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='watchdog0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </watchdog>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <memballoon model='virtio'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <stats period='10'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='balloon0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <rng model='virtio'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <backend model='random'>/dev/urandom</backend>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <alias name='rng0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <label>system_u:system_r:svirt_t:s0:c141,c952</label>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c141,c952</imagelabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <label>+107:+107</label>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <imagelabel>+107:+107</imagelabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </seclabel>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.483 2 INFO nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tapa6efe4ab-2a from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the live domain config.#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.484 2 DEBUG nova.virt.libvirt.vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.484 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.485 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.486 2 DEBUG os_vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6efe4ab-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.497 2 INFO os_vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a')#033[00m
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.497 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:creationTime>2025-10-10 10:17:16</nova:creationTime>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:flavor name="m1.nano">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:memory>128</nova:memory>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:disk>1</nova:disk>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:swap>0</nova:swap>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:vcpus>1</nova:vcpus>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:flavor>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:owner>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:owner>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  <nova:ports>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 06:17:16 np0005479822 nova_compute[235132]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:    </nova:port>
Oct 10 06:17:16 np0005479822 nova_compute[235132]:  </nova:ports>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: </nova:instance>
Oct 10 06:17:16 np0005479822 nova_compute[235132]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct 10 06:17:16 np0005479822 podman[245483]: 2025-10-10 10:17:16.564501355 +0000 UTC m=+0.106834509 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 06:17:16 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : haproxy version is 2.8.14-c23fe91
Oct 10 06:17:16 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : path to executable is /usr/sbin/haproxy
Oct 10 06:17:16 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [WARNING]  (244904) : Exiting Master process...
Oct 10 06:17:16 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [ALERT]    (244904) : Current worker (244906) exited with code 143 (Terminated)
Oct 10 06:17:16 np0005479822 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [WARNING]  (244904) : All workers exited. Exiting... (0)
Oct 10 06:17:16 np0005479822 systemd[1]: libpod-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050.scope: Deactivated successfully.
Oct 10 06:17:16 np0005479822 podman[245523]: 2025-10-10 10:17:16.623113932 +0000 UTC m=+0.054495885 container died 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:17:16 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050-userdata-shm.mount: Deactivated successfully.
Oct 10 06:17:16 np0005479822 systemd[1]: var-lib-containers-storage-overlay-e0e01352dbf0d1bebdf46980d76e9074c40ab7243819392e4c65167a834fa151-merged.mount: Deactivated successfully.
Oct 10 06:17:16 np0005479822 podman[245523]: 2025-10-10 10:17:16.668789504 +0000 UTC m=+0.100171447 container cleanup 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 06:17:16 np0005479822 systemd[1]: libpod-conmon-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050.scope: Deactivated successfully.
Oct 10 06:17:16 np0005479822 podman[245554]: 2025-10-10 10:17:16.749749964 +0000 UTC m=+0.050596358 container remove 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.759 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[52e347b4-fac0-4510-ab7e-ffb24ca5b0e0]: (4, ('Fri Oct 10 10:17:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 (24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050)\n24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050\nFri Oct 10 10:17:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 (24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050)\n24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.761 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[abef3c15-6ac0-43a2-8957-f1dacda13853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.762 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87f6394d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:17:16 np0005479822 kernel: tap87f6394d-40: left promiscuous mode
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:16 np0005479822 nova_compute[235132]: 2025-10-10 10:17:16.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.792 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[79221226-4bc7-4cf9-b9f5-126616bb35f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.833 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[16e40510-e8e4-43fa-b8be-63729acc68eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.835 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[99210dc1-9962-4362-9888-3fc26e84ae9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.859 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a334b838-be0c-4206-9f14-7e462e3a6a68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426943, 'reachable_time': 41874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245569, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.863 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 06:17:16 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.863 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[140656c6-7a47-476b-9322-200093305d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:16 np0005479822 systemd[1]: run-netns-ovnmeta\x2d87f6394d\x2d4290\x2d4eca\x2d8ba0\x2d18711f3ad6e0.mount: Deactivated successfully.
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.020 2 DEBUG nova.compute.manager [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-unplugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.020 2 DEBUG oslo_concurrency.lockutils [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.021 2 DEBUG oslo_concurrency.lockutils [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.021 2 DEBUG oslo_concurrency.lockutils [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.022 2 DEBUG nova.compute.manager [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-unplugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.022 2 WARNING nova.compute.manager [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-unplugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state None.#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.331 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.331 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:17:17 np0005479822 nova_compute[235132]: 2025-10-10 10:17:17.331 2 DEBUG nova.network.neutron [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:17:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:18 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:18Z|00083|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.208 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:17:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.491 2 INFO nova.network.neutron [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.492 2 DEBUG nova.network.neutron [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.515 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.543 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG nova.compute.manager [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG nova.compute.manager [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG oslo_concurrency.lockutils [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG oslo_concurrency.lockutils [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG nova.network.neutron [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:17:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.824 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.824 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.825 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.825 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.825 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.827 2 INFO nova.compute.manager [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Terminating instance#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.828 2 DEBUG nova.compute.manager [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 06:17:18 np0005479822 kernel: tap562e8418-d4 (unregistering): left promiscuous mode
Oct 10 06:17:18 np0005479822 NetworkManager[44982]: <info>  [1760091438.8862] device (tap562e8418-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:17:18 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:18Z|00084|binding|INFO|Releasing lport 562e8418-d47e-4fd1-8a23-094e0ce40097 from this chassis (sb_readonly=0)
Oct 10 06:17:18 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:18Z|00085|binding|INFO|Setting lport 562e8418-d47e-4fd1-8a23-094e0ce40097 down in Southbound
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:18 np0005479822 ovn_controller[131749]: 2025-10-10T10:17:18Z|00086|binding|INFO|Removing iface tap562e8418-d4 ovn-installed in OVS
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.910 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:fc:1f 10.100.0.12'], port_security=['fa:16:3e:73:fc:1f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f14a6f9-41f9-49f8-b407-62ca2cdc0259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d717de-5083-46ba-b06e-f3ccc6cb202a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=562e8418-d47e-4fd1-8a23-094e0ce40097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:17:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.913 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 562e8418-d47e-4fd1-8a23-094e0ce40097 in datapath ebfb122d-a6ca-4257-952a-e1a888448e1c unbound from our chassis#033[00m
Oct 10 06:17:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.914 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebfb122d-a6ca-4257-952a-e1a888448e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 06:17:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.915 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3750a1de-3d1a-4491-a366-23a5b11d0170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.916 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c namespace which is not needed anymore#033[00m
Oct 10 06:17:18 np0005479822 nova_compute[235132]: 2025-10-10 10:17:18.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:18 np0005479822 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 10 06:17:18 np0005479822 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 20.122s CPU time.
Oct 10 06:17:18 np0005479822 systemd-machined[191637]: Machine qemu-4-instance-00000006 terminated.
Oct 10 06:17:19 np0005479822 NetworkManager[44982]: <info>  [1760091439.0489] manager: (tap562e8418-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.074 2 INFO nova.virt.libvirt.driver [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance destroyed successfully.#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.075 2 DEBUG nova.objects.instance [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:17:19 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : haproxy version is 2.8.14-c23fe91
Oct 10 06:17:19 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : path to executable is /usr/sbin/haproxy
Oct 10 06:17:19 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [WARNING]  (243163) : Exiting Master process...
Oct 10 06:17:19 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [WARNING]  (243163) : Exiting Master process...
Oct 10 06:17:19 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [ALERT]    (243163) : Current worker (243165) exited with code 143 (Terminated)
Oct 10 06:17:19 np0005479822 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [WARNING]  (243163) : All workers exited. Exiting... (0)
Oct 10 06:17:19 np0005479822 systemd[1]: libpod-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69.scope: Deactivated successfully.
Oct 10 06:17:19 np0005479822 podman[245593]: 2025-10-10 10:17:19.09695412 +0000 UTC m=+0.055528134 container died 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.103 2 DEBUG nova.virt.libvirt.vif [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.103 2 DEBUG nova.network.os_vif_util [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.104 2 DEBUG nova.network.os_vif_util [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.105 2 DEBUG os_vif [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap562e8418-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.112 2 INFO os_vif [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4')#033[00m
Oct 10 06:17:19 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69-userdata-shm.mount: Deactivated successfully.
Oct 10 06:17:19 np0005479822 systemd[1]: var-lib-containers-storage-overlay-3b1835549180fe222ffd4c8fc7255dc61386526a292af096d5df92e7189879c8-merged.mount: Deactivated successfully.
Oct 10 06:17:19 np0005479822 podman[245593]: 2025-10-10 10:17:19.138059777 +0000 UTC m=+0.096633771 container cleanup 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.151 2 DEBUG nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.151 2 DEBUG oslo_concurrency.lockutils [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:19 np0005479822 systemd[1]: libpod-conmon-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69.scope: Deactivated successfully.
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.152 2 DEBUG oslo_concurrency.lockutils [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.152 2 DEBUG oslo_concurrency.lockutils [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.153 2 DEBUG nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.153 2 WARNING nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state deleting.#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.154 2 DEBUG nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-deleted-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:19 np0005479822 podman[245646]: 2025-10-10 10:17:19.208551149 +0000 UTC m=+0.045787787 container remove 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.214 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0b068601-a48f-458f-aa6a-d8116001be85]: (4, ('Fri Oct 10 10:17:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c (35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69)\n35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69\nFri Oct 10 10:17:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c (35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69)\n35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.216 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[66a47aa3-8980-4a22-9f93-7e2f65178c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.218 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebfb122d-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:19 np0005479822 kernel: tapebfb122d-a0: left promiscuous mode
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.227 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[04a0bc56-2fee-44f3-925f-024c253da9c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.255 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[37a521a4-d6c9-4159-baee-2d9296979ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.257 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[dab8f559-26ec-41f3-b010-ab6ba8cf4e70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.278 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[17e73ef2-5fc0-4843-837f-fb22759d296f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424115, 'reachable_time': 24479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245664, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.281 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 10 06:17:19 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.281 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[666a0a7a-bd6d-464d-8381-4daa7e263b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:17:19 np0005479822 systemd[1]: run-netns-ovnmeta\x2debfb122d\x2da6ca\x2d4257\x2d952a\x2de1a888448e1c.mount: Deactivated successfully.
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.619 2 INFO nova.virt.libvirt.driver [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deleting instance files /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_del#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.621 2 INFO nova.virt.libvirt.driver [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deletion of /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_del complete#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.689 2 INFO nova.compute.manager [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.690 2 DEBUG oslo.service.loopingcall [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.690 2 DEBUG nova.compute.manager [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.690 2 DEBUG nova.network.neutron [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.944 2 DEBUG nova.network.neutron [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.945 2 DEBUG nova.network.neutron [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:17:19 np0005479822 nova_compute[235132]: 2025-10-10 10:17:19.964 2 DEBUG oslo_concurrency.lockutils [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:17:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.526 2 DEBUG nova.network.neutron [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.548 2 INFO nova.compute.manager [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 0.86 seconds to deallocate network for instance.#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.601 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.602 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.694 2 DEBUG oslo_concurrency.processutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:17:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:20.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.826 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-unplugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.827 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.828 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.828 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.829 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-unplugged-562e8418-d47e-4fd1-8a23-094e0ce40097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.830 2 WARNING nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-unplugged-562e8418-d47e-4fd1-8a23-094e0ce40097 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.830 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.831 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.832 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.832 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.833 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.833 2 WARNING nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 for instance with vm_state deleted and task_state None.#033[00m
Oct 10 06:17:20 np0005479822 nova_compute[235132]: 2025-10-10 10:17:20.834 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-deleted-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:17:21 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:17:21 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3931265671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:17:21 np0005479822 nova_compute[235132]: 2025-10-10 10:17:21.178 2 DEBUG oslo_concurrency.processutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:17:21 np0005479822 nova_compute[235132]: 2025-10-10 10:17:21.185 2 DEBUG nova.compute.provider_tree [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:17:21 np0005479822 nova_compute[235132]: 2025-10-10 10:17:21.211 2 DEBUG nova.scheduler.client.report [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:17:21 np0005479822 nova_compute[235132]: 2025-10-10 10:17:21.246 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:21 np0005479822 nova_compute[235132]: 2025-10-10 10:17:21.281 2 INFO nova.scheduler.client.report [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107#033[00m
Oct 10 06:17:21 np0005479822 nova_compute[235132]: 2025-10-10 10:17:21.377 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:23 np0005479822 nova_compute[235132]: 2025-10-10 10:17:23.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:24 np0005479822 nova_compute[235132]: 2025-10-10 10:17:24.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:26.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:26.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:27 np0005479822 nova_compute[235132]: 2025-10-10 10:17:27.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:27 np0005479822 nova_compute[235132]: 2025-10-10 10:17:27.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:27 np0005479822 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 06:17:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:28 np0005479822 nova_compute[235132]: 2025-10-10 10:17:28.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:17:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:28.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:17:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:28.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:29 np0005479822 nova_compute[235132]: 2025-10-10 10:17:29.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:29 np0005479822 podman[245722]: 2025-10-10 10:17:29.991972582 +0000 UTC m=+0.084992381 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 10 06:17:30 np0005479822 podman[245721]: 2025-10-10 10:17:30.017375378 +0000 UTC m=+0.111108977 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:17:30 np0005479822 podman[245723]: 2025-10-10 10:17:30.035035782 +0000 UTC m=+0.113401549 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 06:17:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:30.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:30.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:32.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:33 np0005479822 nova_compute[235132]: 2025-10-10 10:17:33.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:34 np0005479822 nova_compute[235132]: 2025-10-10 10:17:34.072 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091439.0712006, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:17:34 np0005479822 nova_compute[235132]: 2025-10-10 10:17:34.073 2 INFO nova.compute.manager [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Stopped (Lifecycle Event)#033[00m
Oct 10 06:17:34 np0005479822 nova_compute[235132]: 2025-10-10 10:17:34.093 2 DEBUG nova.compute.manager [None req-26863534-1111-457a-810e-ee01ebc734ec - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:17:34 np0005479822 nova_compute[235132]: 2025-10-10 10:17:34.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:34.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:34.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:36.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:36.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:38 np0005479822 nova_compute[235132]: 2025-10-10 10:17:38.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:39 np0005479822 nova_compute[235132]: 2025-10-10 10:17:39.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:40.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:17:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:17:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:17:42.215 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:17:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:43 np0005479822 nova_compute[235132]: 2025-10-10 10:17:43.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:44 np0005479822 nova_compute[235132]: 2025-10-10 10:17:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:46 np0005479822 podman[245817]: 2025-10-10 10:17:46.981178911 +0000 UTC m=+0.078577945 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 06:17:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:48 np0005479822 nova_compute[235132]: 2025-10-10 10:17:48.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:48.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:49 np0005479822 nova_compute[235132]: 2025-10-10 10:17:49.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:50.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:52.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:53 np0005479822 nova_compute[235132]: 2025-10-10 10:17:53.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:53 np0005479822 nova_compute[235132]: 2025-10-10 10:17:53.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:17:53 np0005479822 nova_compute[235132]: 2025-10-10 10:17:53.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:54 np0005479822 nova_compute[235132]: 2025-10-10 10:17:54.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:54 np0005479822 nova_compute[235132]: 2025-10-10 10:17:54.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:54.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:54.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:55 np0005479822 nova_compute[235132]: 2025-10-10 10:17:55.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:56.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.064 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.065 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.065 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:57 np0005479822 nova_compute[235132]: 2025-10-10 10:17:57.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:17:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:17:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:17:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:17:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:17:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:17:58 np0005479822 nova_compute[235132]: 2025-10-10 10:17:58.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:17:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:17:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:17:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:17:58 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:17:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:17:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:17:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:17:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:17:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:17:59 np0005479822 nova_compute[235132]: 2025-10-10 10:17:59.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:00 np0005479822 nova_compute[235132]: 2025-10-10 10:18:00.061 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000082s ======
Oct 10 06:18:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct 10 06:18:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:00.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:00 np0005479822 podman[245951]: 2025-10-10 10:18:00.989251023 +0000 UTC m=+0.082110462 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:18:00 np0005479822 podman[245950]: 2025-10-10 10:18:00.993390757 +0000 UTC m=+0.084905399 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 06:18:01 np0005479822 podman[245952]: 2025-10-10 10:18:01.035270924 +0000 UTC m=+0.122074177 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 06:18:01 np0005479822 nova_compute[235132]: 2025-10-10 10:18:01.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.080 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.080 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.081 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.081 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.081 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:18:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:02.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:18:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3550810255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.567 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.771 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.773 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4914MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.773 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.774 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:18:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:02.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.853 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.854 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:18:02 np0005479822 nova_compute[235132]: 2025-10-10 10:18:02.873 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:18:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:03 np0005479822 nova_compute[235132]: 2025-10-10 10:18:03.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:03 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:18:03 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3370266523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:18:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:18:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:18:03 np0005479822 nova_compute[235132]: 2025-10-10 10:18:03.381 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:18:03 np0005479822 nova_compute[235132]: 2025-10-10 10:18:03.388 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:18:03 np0005479822 nova_compute[235132]: 2025-10-10 10:18:03.410 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:18:03 np0005479822 nova_compute[235132]: 2025-10-10 10:18:03.442 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:18:03 np0005479822 nova_compute[235132]: 2025-10-10 10:18:03.442 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:18:04 np0005479822 nova_compute[235132]: 2025-10-10 10:18:04.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:04.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:06 np0005479822 ovn_controller[131749]: 2025-10-10T10:18:06Z|00087|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 10 06:18:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:08 np0005479822 nova_compute[235132]: 2025-10-10 10:18:08.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:09 np0005479822 nova_compute[235132]: 2025-10-10 10:18:09.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:10.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.165030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491165066, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1181, "num_deletes": 501, "total_data_size": 1951641, "memory_usage": 1978856, "flush_reason": "Manual Compaction"}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491175369, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 897365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30018, "largest_seqno": 31194, "table_properties": {"data_size": 893126, "index_size": 1379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13960, "raw_average_key_size": 19, "raw_value_size": 882191, "raw_average_value_size": 1232, "num_data_blocks": 61, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091425, "oldest_key_time": 1760091425, "file_creation_time": 1760091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10406 microseconds, and 4129 cpu microseconds.
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.175430) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 897365 bytes OK
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.175459) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177276) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177297) EVENT_LOG_v1 {"time_micros": 1760091491177290, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177350) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1944925, prev total WAL file size 1944925, number of live WAL files 2.
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.179004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(876KB)], [57(16MB)]
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491179064, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 17822793, "oldest_snapshot_seqno": -1}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5754 keys, 12048907 bytes, temperature: kUnknown
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491252376, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12048907, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12012884, "index_size": 20553, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148801, "raw_average_key_size": 25, "raw_value_size": 11911331, "raw_average_value_size": 2070, "num_data_blocks": 824, "num_entries": 5754, "num_filter_entries": 5754, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252757) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12048907 bytes
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.256452) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.7 rd, 164.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(33.3) write-amplify(13.4) OK, records in: 6747, records dropped: 993 output_compression: NoCompression
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.256481) EVENT_LOG_v1 {"time_micros": 1760091491256467, "job": 34, "event": "compaction_finished", "compaction_time_micros": 73428, "compaction_time_cpu_micros": 50731, "output_level": 6, "num_output_files": 1, "total_output_size": 12048907, "num_input_records": 6747, "num_output_records": 5754, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491256849, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491260610, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.178890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.260960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.260994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.260999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.261003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:18:11 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.261007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:18:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:13 np0005479822 nova_compute[235132]: 2025-10-10 10:18:13.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:14 np0005479822 nova_compute[235132]: 2025-10-10 10:18:14.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:17 np0005479822 podman[246094]: 2025-10-10 10:18:17.967282915 +0000 UTC m=+0.071940914 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:18:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:18:18.024 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:18:18 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:18:18.026 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:18:18 np0005479822 nova_compute[235132]: 2025-10-10 10:18:18.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:18 np0005479822 nova_compute[235132]: 2025-10-10 10:18:18.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:19 np0005479822 nova_compute[235132]: 2025-10-10 10:18:19.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:20.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:18:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:18:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:23 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:18:23.028 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:18:23 np0005479822 nova_compute[235132]: 2025-10-10 10:18:23.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:24 np0005479822 nova_compute[235132]: 2025-10-10 10:18:24.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:18:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:18:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:28 np0005479822 nova_compute[235132]: 2025-10-10 10:18:28.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:29 np0005479822 nova_compute[235132]: 2025-10-10 10:18:29.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:30.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:30.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:31 np0005479822 podman[246146]: 2025-10-10 10:18:31.987304786 +0000 UTC m=+0.082534153 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 06:18:31 np0005479822 podman[246145]: 2025-10-10 10:18:31.988959272 +0000 UTC m=+0.082771131 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 06:18:32 np0005479822 podman[246147]: 2025-10-10 10:18:32.043014473 +0000 UTC m=+0.122535360 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 10 06:18:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:33 np0005479822 nova_compute[235132]: 2025-10-10 10:18:33.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:34 np0005479822 nova_compute[235132]: 2025-10-10 10:18:34.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:18:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:18:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:34.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:36.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:38 np0005479822 nova_compute[235132]: 2025-10-10 10:18:38.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:39 np0005479822 nova_compute[235132]: 2025-10-10 10:18:39.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:18:42.216 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:18:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:18:42.216 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:18:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:18:42.217 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:18:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:43 np0005479822 nova_compute[235132]: 2025-10-10 10:18:43.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:44 np0005479822 nova_compute[235132]: 2025-10-10 10:18:44.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:44.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:48 np0005479822 nova_compute[235132]: 2025-10-10 10:18:48.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:48.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:48 np0005479822 podman[246243]: 2025-10-10 10:18:48.978198447 +0000 UTC m=+0.072999972 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 10 06:18:49 np0005479822 nova_compute[235132]: 2025-10-10 10:18:49.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:50.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:52.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:53 np0005479822 nova_compute[235132]: 2025-10-10 10:18:53.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:54 np0005479822 nova_compute[235132]: 2025-10-10 10:18:54.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:54.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:55 np0005479822 nova_compute[235132]: 2025-10-10 10:18:55.444 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:55 np0005479822 nova_compute[235132]: 2025-10-10 10:18:55.445 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:55 np0005479822 nova_compute[235132]: 2025-10-10 10:18:55.445 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:18:56 np0005479822 nova_compute[235132]: 2025-10-10 10:18:56.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:56.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:18:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:56.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:18:57 np0005479822 nova_compute[235132]: 2025-10-10 10:18:57.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:57 np0005479822 nova_compute[235132]: 2025-10-10 10:18:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:18:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:18:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:18:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:18:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:18:58 np0005479822 nova_compute[235132]: 2025-10-10 10:18:58.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:58 np0005479822 nova_compute[235132]: 2025-10-10 10:18:58.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:18:58 np0005479822 nova_compute[235132]: 2025-10-10 10:18:58.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:18:58 np0005479822 nova_compute[235132]: 2025-10-10 10:18:58.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:18:58 np0005479822 nova_compute[235132]: 2025-10-10 10:18:58.068 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:18:58 np0005479822 nova_compute[235132]: 2025-10-10 10:18:58.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:18:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:58.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:18:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:18:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:58.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:18:59 np0005479822 nova_compute[235132]: 2025-10-10 10:18:59.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:00.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.075 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.076 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.077 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:19:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:02.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:19:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3651707172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.552 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.704 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.706 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4910MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.706 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.707 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.776 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.777 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:19:02 np0005479822 nova_compute[235132]: 2025-10-10 10:19:02.818 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:19:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:02 np0005479822 podman[246318]: 2025-10-10 10:19:02.970244093 +0000 UTC m=+0.066679049 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 10 06:19:02 np0005479822 podman[246319]: 2025-10-10 10:19:02.981612584 +0000 UTC m=+0.068986031 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 10 06:19:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:03 np0005479822 podman[246325]: 2025-10-10 10:19:03.011835783 +0000 UTC m=+0.092133307 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 10 06:19:03 np0005479822 nova_compute[235132]: 2025-10-10 10:19:03.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:03 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:19:03 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4015442307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:19:03 np0005479822 nova_compute[235132]: 2025-10-10 10:19:03.294 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:19:03 np0005479822 nova_compute[235132]: 2025-10-10 10:19:03.302 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:19:03 np0005479822 nova_compute[235132]: 2025-10-10 10:19:03.317 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:19:03 np0005479822 nova_compute[235132]: 2025-10-10 10:19:03.319 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:19:03 np0005479822 nova_compute[235132]: 2025-10-10 10:19:03.320 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:19:03 np0005479822 podman[246523]: 2025-10-10 10:19:03.784866005 +0000 UTC m=+0.060847300 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 06:19:03 np0005479822 podman[246523]: 2025-10-10 10:19:03.911943008 +0000 UTC m=+0.187924283 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:19:04 np0005479822 nova_compute[235132]: 2025-10-10 10:19:04.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:04 np0005479822 podman[246641]: 2025-10-10 10:19:04.409584621 +0000 UTC m=+0.063033090 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 06:19:04 np0005479822 podman[246641]: 2025-10-10 10:19:04.421870257 +0000 UTC m=+0.075318726 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 06:19:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000082s ======
Oct 10 06:19:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:04.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct 10 06:19:04 np0005479822 podman[246733]: 2025-10-10 10:19:04.815000645 +0000 UTC m=+0.063567804 container exec d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:19:04 np0005479822 podman[246733]: 2025-10-10 10:19:04.836944996 +0000 UTC m=+0.085512135 container exec_died d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 06:19:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:05 np0005479822 podman[246800]: 2025-10-10 10:19:05.082806526 +0000 UTC m=+0.062366421 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 06:19:05 np0005479822 podman[246800]: 2025-10-10 10:19:05.092377299 +0000 UTC m=+0.071937154 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 06:19:05 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:05 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:05 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 06:19:05 np0005479822 podman[246866]: 2025-10-10 10:19:05.314391155 +0000 UTC m=+0.054756943 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 10 06:19:05 np0005479822 podman[246866]: 2025-10-10 10:19:05.326590309 +0000 UTC m=+0.066956077 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, version=2.2.4, release=1793, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Oct 10 06:19:06 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:06 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:06 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:06 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:19:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:08 np0005479822 nova_compute[235132]: 2025-10-10 10:19:08.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:08.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:08.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:09 np0005479822 nova_compute[235132]: 2025-10-10 10:19:09.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:10.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:19:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:12.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:12.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:13 np0005479822 nova_compute[235132]: 2025-10-10 10:19:13.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:14 np0005479822 nova_compute[235132]: 2025-10-10 10:19:14.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:14.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:14.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:16.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:18 np0005479822 nova_compute[235132]: 2025-10-10 10:19:18.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:18.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:19 np0005479822 nova_compute[235132]: 2025-10-10 10:19:19.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:20 np0005479822 podman[247082]: 2025-10-10 10:19:20.002785421 +0000 UTC m=+0.102492593 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 06:19:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:20.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:22 np0005479822 nova_compute[235132]: 2025-10-10 10:19:22.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:22 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:19:22.532 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 06:19:22 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:19:22.534 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 06:19:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:22.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:23 np0005479822 nova_compute[235132]: 2025-10-10 10:19:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:24 np0005479822 nova_compute[235132]: 2025-10-10 10:19:24.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:24.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:26.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:27 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:19:27.536 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 06:19:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:28 np0005479822 nova_compute[235132]: 2025-10-10 10:19:28.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:28.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:29 np0005479822 nova_compute[235132]: 2025-10-10 10:19:29.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:30.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:19:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:30.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:19:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:32.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:33 np0005479822 nova_compute[235132]: 2025-10-10 10:19:33.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:33 np0005479822 podman[247134]: 2025-10-10 10:19:33.980691589 +0000 UTC m=+0.077464239 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 10 06:19:33 np0005479822 podman[247133]: 2025-10-10 10:19:33.991650389 +0000 UTC m=+0.084762529 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:19:34 np0005479822 podman[247135]: 2025-10-10 10:19:34.113898801 +0000 UTC m=+0.203619318 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:19:34 np0005479822 nova_compute[235132]: 2025-10-10 10:19:34.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:19:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:34.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:38 np0005479822 nova_compute[235132]: 2025-10-10 10:19:38.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:39.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:39 np0005479822 nova_compute[235132]: 2025-10-10 10:19:39.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:41.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:19:42.217 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:19:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:19:42.218 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:19:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:19:42.218 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:19:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:42.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:43 np0005479822 nova_compute[235132]: 2025-10-10 10:19:43.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:44 np0005479822 nova_compute[235132]: 2025-10-10 10:19:44.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:19:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:44.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:19:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:45.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:48 np0005479822 nova_compute[235132]: 2025-10-10 10:19:48.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:49.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:49 np0005479822 nova_compute[235132]: 2025-10-10 10:19:49.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:50 np0005479822 podman[247230]: 2025-10-10 10:19:50.965428359 +0000 UTC m=+0.066752437 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:19:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:51.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:52.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:53.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:53 np0005479822 nova_compute[235132]: 2025-10-10 10:19:53.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:54 np0005479822 nova_compute[235132]: 2025-10-10 10:19:54.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:55.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:55 np0005479822 nova_compute[235132]: 2025-10-10 10:19:55.320 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:55 np0005479822 nova_compute[235132]: 2025-10-10 10:19:55.321 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:19:56 np0005479822 nova_compute[235132]: 2025-10-10 10:19:56.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:56 np0005479822 nova_compute[235132]: 2025-10-10 10:19:56.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:19:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:19:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:57 np0005479822 nova_compute[235132]: 2025-10-10 10:19:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:57 np0005479822 nova_compute[235132]: 2025-10-10 10:19:57.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:19:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:19:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:19:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:19:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:19:58 np0005479822 nova_compute[235132]: 2025-10-10 10:19:58.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:19:58 np0005479822 nova_compute[235132]: 2025-10-10 10:19:58.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:19:58 np0005479822 nova_compute[235132]: 2025-10-10 10:19:58.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:19:58 np0005479822 nova_compute[235132]: 2025-10-10 10:19:58.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:19:58 np0005479822 nova_compute[235132]: 2025-10-10 10:19:58.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:19:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:19:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:19:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:19:59 np0005479822 nova_compute[235132]: 2025-10-10 10:19:59.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:00 np0005479822 nova_compute[235132]: 2025-10-10 10:20:00.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:20:00 np0005479822 nova_compute[235132]: 2025-10-10 10:20:00.062 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:20:00 np0005479822 ceph-mon[79167]: overall HEALTH_OK
Oct 10 06:20:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:01.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:02 np0005479822 nova_compute[235132]: 2025-10-10 10:20:02.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:20:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:02.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:03.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.073 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.074 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.074 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.075 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.075 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:03 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:20:03 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1396745541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.619 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.876 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.878 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4924MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.878 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.878 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.950 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.950 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:20:03 np0005479822 nova_compute[235132]: 2025-10-10 10:20:03.969 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:20:04 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190815280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.436 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.443 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.468 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.470 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.470 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:04.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.832 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.833 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.878 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.956 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.957 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.966 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 10 06:20:04 np0005479822 nova_compute[235132]: 2025-10-10 10:20:04.966 2 INFO nova.compute.claims [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 10 06:20:04 np0005479822 podman[247325]: 2025-10-10 10:20:04.996776328 +0000 UTC m=+0.080625815 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 10 06:20:05 np0005479822 podman[247326]: 2025-10-10 10:20:05.012364664 +0000 UTC m=+0.087671408 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:20:05 np0005479822 podman[247327]: 2025-10-10 10:20:05.0312278 +0000 UTC m=+0.104779556 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:20:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.068 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:20:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1617462293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.525 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.533 2 DEBUG nova.compute.provider_tree [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.563 2 DEBUG nova.scheduler.client.report [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.603 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.604 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.664 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.665 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.691 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.716 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.838 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.840 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.841 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Creating image(s)#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.876 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.922 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.960 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.966 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:05 np0005479822 nova_compute[235132]: 2025-10-10 10:20:05.994 2 DEBUG nova.policy [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.054 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.055 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.056 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.056 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.083 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.087 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.380 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.485 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct 10 06:20:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:20:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.623 2 DEBUG nova.objects.instance [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.642 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.642 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Ensure instance console log exists: /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.643 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.644 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.644 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:06 np0005479822 nova_compute[235132]: 2025-10-10 10:20:06.694 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Successfully created port: d7538303-305d-4e01-9d26-cff58aec5656 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 10 06:20:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.691 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Successfully updated port: d7538303-305d-4e01-9d26-cff58aec5656 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.715 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.716 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.717 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.831 2 DEBUG nova.compute.manager [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-changed-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.832 2 DEBUG nova.compute.manager [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing instance network info cache due to event network-changed-d7538303-305d-4e01-9d26-cff58aec5656. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.832 2 DEBUG oslo_concurrency.lockutils [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:20:07 np0005479822 nova_compute[235132]: 2025-10-10 10:20:07.964 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 10 06:20:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:08.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.656 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.695 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.696 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance network_info: |[{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.697 2 DEBUG oslo_concurrency.lockutils [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.697 2 DEBUG nova.network.neutron [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing network info cache for port d7538303-305d-4e01-9d26-cff58aec5656 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.700 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start _get_guest_xml network_info=[{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.707 2 WARNING nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.712 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.713 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.720 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.720 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.721 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.721 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.722 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.722 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.722 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.723 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.723 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.723 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.725 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.725 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.726 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.726 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 10 06:20:08 np0005479822 nova_compute[235132]: 2025-10-10 10:20:08.730 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:09.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:20:09 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1482921149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.236 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.278 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.283 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 06:20:09 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1189993806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.753 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.755 2 DEBUG nova.virt.libvirt.vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:20:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292020392',display_name='tempest-TestNetworkBasicOps-server-1292020392',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292020392',id=12,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAT+YikYTBA+gJ7uw6swAHe8UXJlrkdRsMPU0KwiyyFauWaLZUlwpDJtpNi3JcVUbWLYjO0HRPwQgIDxYUsNqQN2uz9WldWafuvChAH95C9TEkm8Ni1fVouqScJtHFj6Ww==',key_name='tempest-TestNetworkBasicOps-218277133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-3xmgv6ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:20:05Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.755 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.756 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.758 2 DEBUG nova.objects.instance [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.780 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] End _get_guest_xml xml=<domain type="kvm">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <uuid>03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3</uuid>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <name>instance-0000000c</name>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <memory>131072</memory>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <vcpu>1</vcpu>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <metadata>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:name>tempest-TestNetworkBasicOps-server-1292020392</nova:name>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:creationTime>2025-10-10 10:20:08</nova:creationTime>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:flavor name="m1.nano">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:memory>128</nova:memory>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:disk>1</nova:disk>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:swap>0</nova:swap>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:ephemeral>0</nova:ephemeral>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:vcpus>1</nova:vcpus>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </nova:flavor>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:owner>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </nova:owner>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <nova:ports>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <nova:port uuid="d7538303-305d-4e01-9d26-cff58aec5656">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        </nova:port>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </nova:ports>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </nova:instance>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </metadata>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <sysinfo type="smbios">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <system>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <entry name="manufacturer">RDO</entry>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <entry name="product">OpenStack Compute</entry>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <entry name="serial">03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3</entry>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <entry name="uuid">03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3</entry>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <entry name="family">Virtual Machine</entry>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </system>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </sysinfo>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <os>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <boot dev="hd"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <smbios mode="sysinfo"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </os>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <features>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <acpi/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <apic/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <vmcoreinfo/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </features>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <clock offset="utc">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <timer name="pit" tickpolicy="delay"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <timer name="hpet" present="no"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </clock>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <cpu mode="host-model" match="exact">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <topology sockets="1" cores="1" threads="1"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </cpu>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  <devices>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <disk type="network" device="disk">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <target dev="vda" bus="virtio"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <disk type="network" device="cdrom">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <driver type="raw" cache="none"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <source protocol="rbd" name="vms/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <host name="192.168.122.100" port="6789"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <host name="192.168.122.102" port="6789"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <host name="192.168.122.101" port="6789"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </source>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <auth username="openstack">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:        <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      </auth>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <target dev="sda" bus="sata"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </disk>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <interface type="ethernet">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <mac address="fa:16:3e:1a:a6:6c"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <driver name="vhost" rx_queue_size="512"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <mtu size="1442"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <target dev="tapd7538303-30"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </interface>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <serial type="pty">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <log file="/var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/console.log" append="off"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </serial>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <video>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <model type="virtio"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </video>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <input type="tablet" bus="usb"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <rng model="virtio">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <backend model="random">/dev/urandom</backend>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </rng>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="pci" model="pcie-root-port"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <controller type="usb" index="0"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    <memballoon model="virtio">
Oct 10 06:20:09 np0005479822 nova_compute[235132]:      <stats period="10"/>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:    </memballoon>
Oct 10 06:20:09 np0005479822 nova_compute[235132]:  </devices>
Oct 10 06:20:09 np0005479822 nova_compute[235132]: </domain>
Oct 10 06:20:09 np0005479822 nova_compute[235132]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.782 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Preparing to wait for external event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.782 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.783 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.783 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.785 2 DEBUG nova.virt.libvirt.vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:20:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292020392',display_name='tempest-TestNetworkBasicOps-server-1292020392',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292020392',id=12,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAT+YikYTBA+gJ7uw6swAHe8UXJlrkdRsMPU0KwiyyFauWaLZUlwpDJtpNi3JcVUbWLYjO0HRPwQgIDxYUsNqQN2uz9WldWafuvChAH95C9TEkm8Ni1fVouqScJtHFj6Ww==',key_name='tempest-TestNetworkBasicOps-218277133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-3xmgv6ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:20:05Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.785 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.786 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.787 2 DEBUG os_vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.800 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7538303-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7538303-30, col_values=(('external_ids', {'iface-id': 'd7538303-305d-4e01-9d26-cff58aec5656', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:a6:6c', 'vm-uuid': '03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:09 np0005479822 NetworkManager[44982]: <info>  [1760091609.8058] manager: (tapd7538303-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.818 2 INFO os_vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30')#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.885 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.886 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.886 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:1a:a6:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.887 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Using config drive#033[00m
Oct 10 06:20:09 np0005479822 nova_compute[235132]: 2025-10-10 10:20:09.925 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:11.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:11 np0005479822 nova_compute[235132]: 2025-10-10 10:20:11.064 2 DEBUG nova.network.neutron [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updated VIF entry in instance network info cache for port d7538303-305d-4e01-9d26-cff58aec5656. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:20:11 np0005479822 nova_compute[235132]: 2025-10-10 10:20:11.065 2 DEBUG nova.network.neutron [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:20:11 np0005479822 nova_compute[235132]: 2025-10-10 10:20:11.089 2 DEBUG oslo_concurrency.lockutils [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:20:11 np0005479822 nova_compute[235132]: 2025-10-10 10:20:11.978 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Creating config drive at /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config#033[00m
Oct 10 06:20:11 np0005479822 nova_compute[235132]: 2025-10-10 10:20:11.986 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0j9c2po execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.132 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0j9c2po" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.183 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.188 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.359 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.361 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deleting local config drive /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config because it was imported into RBD.#033[00m
Oct 10 06:20:12 np0005479822 systemd[1]: Starting libvirt secret daemon...
Oct 10 06:20:12 np0005479822 systemd[1]: Started libvirt secret daemon.
Oct 10 06:20:12 np0005479822 kernel: tapd7538303-30: entered promiscuous mode
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.4950] manager: (tapd7538303-30): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct 10 06:20:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:12Z|00088|binding|INFO|Claiming lport d7538303-305d-4e01-9d26-cff58aec5656 for this chassis.
Oct 10 06:20:12 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:12Z|00089|binding|INFO|d7538303-305d-4e01-9d26-cff58aec5656: Claiming fa:16:3e:1a:a6:6c 10.100.0.12
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.5188] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.5201] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.525 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a6:6c 10.100.0.12'], port_security=['fa:16:3e:1a:a6:6c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1cafd03-311e-4cea-ac47-0377bdc1af9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc36a9e4-a12c-4b9d-8968-49f72bde3476, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=d7538303-305d-4e01-9d26-cff58aec5656) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.527 141156 INFO neutron.agent.ovn.metadata.agent [-] Port d7538303-305d-4e01-9d26-cff58aec5656 in datapath fb3e50c5-fe48-4113-87d7-4e11945ac752 bound to our chassis#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.528 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb3e50c5-fe48-4113-87d7-4e11945ac752#033[00m
Oct 10 06:20:12 np0005479822 systemd-machined[191637]: New machine qemu-5-instance-0000000c.
Oct 10 06:20:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:12.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.547 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[57aac461-35a0-4c8d-9727-6a3ec2540824]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.549 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb3e50c5-f1 in ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.552 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb3e50c5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.553 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[578ab604-4bed-458e-bfd0-037968e669a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.554 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[58740491-ec9c-4557-829c-213322629d15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.569 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[41cebd4c-ef06-4878-974e-47d46f9dafc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.593 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[208e6a63-48fa-4952-8290-4d26f2995491]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:12Z|00090|binding|INFO|Setting lport d7538303-305d-4e01-9d26-cff58aec5656 ovn-installed in OVS
Oct 10 06:20:12 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:12Z|00091|binding|INFO|Setting lport d7538303-305d-4e01-9d26-cff58aec5656 up in Southbound
Oct 10 06:20:12 np0005479822 systemd-udevd[247800]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.6326] device (tapd7538303-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.6335] device (tapd7538303-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.646 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[9371bb70-d364-4690-a2b3-2c47b59cc8ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.6566] manager: (tapfb3e50c5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.655 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4917589c-28e7-4cf8-a5d0-41d08b52b84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.692 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[e47b3dcd-a0e5-4135-9e77-4113a3b2ce84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.696 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a1318f62-2400-44f1-a651-ab02dedfd5fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.7219] device (tapfb3e50c5-f0): carrier: link connected
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.731 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[41d567ed-a799-4095-a866-de0b98bd0287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.760 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ae8212-e336-4d1f-9153-80035dde9613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb3e50c5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:c3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452643, 'reachable_time': 41197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247832, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.773 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cca277-1098-41c6-8086-cef2e2689343]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:c3b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452643, 'tstamp': 452643}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247834, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.788 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f8d056-774c-47bd-8e14-13af366a67e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb3e50c5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:c3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452643, 'reachable_time': 41197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247835, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.820 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c0874b5a-4e35-4e62-aabd-ab86bbfff924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.893 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6cab77e1-f00a-4d89-aec2-cdcb69472837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.895 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3e50c5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.895 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.896 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb3e50c5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 kernel: tapfb3e50c5-f0: entered promiscuous mode
Oct 10 06:20:12 np0005479822 NetworkManager[44982]: <info>  [1760091612.8984] manager: (tapfb3e50c5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.902 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb3e50c5-f0, col_values=(('external_ids', {'iface-id': '50744b55-fb9e-4bc1-a3e6-4ad27846c672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:12 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:12Z|00092|binding|INFO|Releasing lport 50744b55-fb9e-4bc1-a3e6-4ad27846c672 from this chassis (sb_readonly=0)
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.908 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb3e50c5-fe48-4113-87d7-4e11945ac752.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb3e50c5-fe48-4113-87d7-4e11945ac752.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.909 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[e01361eb-6002-4bc2-ab8e-012405f7134a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.910 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: global
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    log         /dev/log local0 debug
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    log-tag     haproxy-metadata-proxy-fb3e50c5-fe48-4113-87d7-4e11945ac752
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    user        root
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    group       root
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    maxconn     1024
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    pidfile     /var/lib/neutron/external/pids/fb3e50c5-fe48-4113-87d7-4e11945ac752.pid.haproxy
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    daemon
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: defaults
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    log global
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    mode http
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    option httplog
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    option dontlognull
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    option http-server-close
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    option forwardfor
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    retries                 3
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    timeout http-request    30s
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    timeout connect         30s
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    timeout client          32s
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    timeout server          32s
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    timeout http-keep-alive 30s
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: listen listener
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    bind 169.254.169.254:80
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    server metadata /var/lib/neutron/metadata_proxy
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]:    http-request add-header X-OVN-Network-ID fb3e50c5-fe48-4113-87d7-4e11945ac752
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 10 06:20:12 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.911 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'env', 'PROCESS_TAG=haproxy-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb3e50c5-fe48-4113-87d7-4e11945ac752.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 10 06:20:12 np0005479822 nova_compute[235132]: 2025-10-10 10:20:12.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:13.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.168 2 DEBUG nova.compute.manager [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.169 2 DEBUG oslo_concurrency.lockutils [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.169 2 DEBUG oslo_concurrency.lockutils [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.169 2 DEBUG oslo_concurrency.lockutils [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.170 2 DEBUG nova.compute.manager [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Processing event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 10 06:20:13 np0005479822 podman[247921]: 2025-10-10 10:20:13.290934007 +0000 UTC m=+0.059018435 container create c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:13 np0005479822 systemd[1]: Started libpod-conmon-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6.scope.
Oct 10 06:20:13 np0005479822 podman[247921]: 2025-10-10 10:20:13.267679271 +0000 UTC m=+0.035763729 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 06:20:13 np0005479822 systemd[1]: Started libcrun container.
Oct 10 06:20:13 np0005479822 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f376d2e93cbfc6e1e776215b96aff2cab5e98ad5b3c1c6a6ebb79786c35344/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 06:20:13 np0005479822 podman[247921]: 2025-10-10 10:20:13.384065893 +0000 UTC m=+0.152150321 container init c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 06:20:13 np0005479822 podman[247921]: 2025-10-10 10:20:13.389105721 +0000 UTC m=+0.157190149 container start c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:20:13 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : New worker (247943) forked
Oct 10 06:20:13 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : Loading success.
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.487 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.487 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091613.4864576, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.488 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Started (Lifecycle Event)#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.490 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.494 2 INFO nova.virt.libvirt.driver [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance spawned successfully.#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.494 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.518 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.524 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.529 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.529 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.530 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.530 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.531 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.531 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.568 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.568 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091613.4865792, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.568 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Paused (Lifecycle Event)#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.605 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.609 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091613.4902136, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.610 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Resumed (Lifecycle Event)#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.630 2 INFO nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 7.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.630 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.641 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.644 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.681 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.716 2 INFO nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 8.78 seconds to build instance.#033[00m
Oct 10 06:20:13 np0005479822 nova_compute[235132]: 2025-10-10 10:20:13.740 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:14.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:14 np0005479822 nova_compute[235132]: 2025-10-10 10:20:14.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:15 np0005479822 nova_compute[235132]: 2025-10-10 10:20:15.265 2 DEBUG nova.compute.manager [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:20:15 np0005479822 nova_compute[235132]: 2025-10-10 10:20:15.265 2 DEBUG oslo_concurrency.lockutils [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:15 np0005479822 nova_compute[235132]: 2025-10-10 10:20:15.266 2 DEBUG oslo_concurrency.lockutils [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:15 np0005479822 nova_compute[235132]: 2025-10-10 10:20:15.266 2 DEBUG oslo_concurrency.lockutils [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:15 np0005479822 nova_compute[235132]: 2025-10-10 10:20:15.266 2 DEBUG nova.compute.manager [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] No waiting events found dispatching network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 10 06:20:15 np0005479822 nova_compute[235132]: 2025-10-10 10:20:15.266 2 WARNING nova.compute.manager [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received unexpected event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 for instance with vm_state active and task_state None.#033[00m
Oct 10 06:20:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:20:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:20:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:16.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:17.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:17 np0005479822 ceph-mon[79167]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 10 06:20:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:18 np0005479822 nova_compute[235132]: 2025-10-10 10:20:18.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:19.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:19 np0005479822 nova_compute[235132]: 2025-10-10 10:20:19.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:21.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:21 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:22 np0005479822 podman[248006]: 2025-10-10 10:20:22.009867903 +0000 UTC m=+0.096682555 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 06:20:22 np0005479822 nova_compute[235132]: 2025-10-10 10:20:22.108 2 DEBUG nova.compute.manager [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-changed-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 10 06:20:22 np0005479822 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG nova.compute.manager [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing instance network info cache due to event network-changed-d7538303-305d-4e01-9d26-cff58aec5656. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 10 06:20:22 np0005479822 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG oslo_concurrency.lockutils [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 10 06:20:22 np0005479822 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG oslo_concurrency.lockutils [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 10 06:20:22 np0005479822 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG nova.network.neutron [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing network info cache for port d7538303-305d-4e01-9d26-cff58aec5656 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 10 06:20:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:23 np0005479822 nova_compute[235132]: 2025-10-10 10:20:23.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:23 np0005479822 nova_compute[235132]: 2025-10-10 10:20:23.977 2 DEBUG nova.network.neutron [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updated VIF entry in instance network info cache for port d7538303-305d-4e01-9d26-cff58aec5656. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 10 06:20:23 np0005479822 nova_compute[235132]: 2025-10-10 10:20:23.977 2 DEBUG nova.network.neutron [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 10 06:20:24 np0005479822 nova_compute[235132]: 2025-10-10 10:20:24.001 2 DEBUG oslo_concurrency.lockutils [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 10 06:20:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:24 np0005479822 nova_compute[235132]: 2025-10-10 10:20:24.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:26 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:26Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:a6:6c 10.100.0.12
Oct 10 06:20:26 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:26Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:a6:6c 10.100.0.12
Oct 10 06:20:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:28 np0005479822 nova_compute[235132]: 2025-10-10 10:20:28.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:29 np0005479822 nova_compute[235132]: 2025-10-10 10:20:29.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:20:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:31.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:20:32 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:20:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:33.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:33 np0005479822 nova_compute[235132]: 2025-10-10 10:20:33.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:34 np0005479822 nova_compute[235132]: 2025-10-10 10:20:34.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:35.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:35 np0005479822 podman[248034]: 2025-10-10 10:20:35.945520514 +0000 UTC m=+0.052423234 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 06:20:35 np0005479822 podman[248035]: 2025-10-10 10:20:35.966360294 +0000 UTC m=+0.067596620 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:20:36 np0005479822 podman[248036]: 2025-10-10 10:20:36.000680413 +0000 UTC m=+0.094651060 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:20:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:37.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:38 np0005479822 nova_compute[235132]: 2025-10-10 10:20:38.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:38 np0005479822 nova_compute[235132]: 2025-10-10 10:20:38.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:38 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:38.890 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:20:38 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:38.892 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:20:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:39.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:39 np0005479822 nova_compute[235132]: 2025-10-10 10:20:39.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:41.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:42.219 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:42.219 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:42.220 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:43.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:43 np0005479822 nova_compute[235132]: 2025-10-10 10:20:43.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:43 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:43.894 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:20:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:44 np0005479822 nova_compute[235132]: 2025-10-10 10:20:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.109 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.110 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.111 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.112 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.112 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.115 2 INFO nova.compute.manager [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Terminating instance#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.117 2 DEBUG nova.compute.manager [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 10 06:20:45 np0005479822 kernel: tapd7538303-30 (unregistering): left promiscuous mode
Oct 10 06:20:45 np0005479822 NetworkManager[44982]: <info>  [1760091645.1752] device (tapd7538303-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 06:20:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:45Z|00093|binding|INFO|Releasing lport d7538303-305d-4e01-9d26-cff58aec5656 from this chassis (sb_readonly=0)
Oct 10 06:20:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:45Z|00094|binding|INFO|Setting lport d7538303-305d-4e01-9d26-cff58aec5656 down in Southbound
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:45 np0005479822 ovn_controller[131749]: 2025-10-10T10:20:45Z|00095|binding|INFO|Removing iface tapd7538303-30 ovn-installed in OVS
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.200 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a6:6c 10.100.0.12'], port_security=['fa:16:3e:1a:a6:6c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1cafd03-311e-4cea-ac47-0377bdc1af9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc36a9e4-a12c-4b9d-8968-49f72bde3476, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=d7538303-305d-4e01-9d26-cff58aec5656) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.203 141156 INFO neutron.agent.ovn.metadata.agent [-] Port d7538303-305d-4e01-9d26-cff58aec5656 in datapath fb3e50c5-fe48-4113-87d7-4e11945ac752 unbound from our chassis#033[00m
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.205 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb3e50c5-fe48-4113-87d7-4e11945ac752, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.207 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4d70a234-9896-45b0-b4f6-5f91b0a0fbfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.208 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 namespace which is not needed anymore#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:20:45 np0005479822 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 10 06:20:45 np0005479822 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 14.911s CPU time.
Oct 10 06:20:45 np0005479822 systemd-machined[191637]: Machine qemu-5-instance-0000000c terminated.
Oct 10 06:20:45 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : haproxy version is 2.8.14-c23fe91
Oct 10 06:20:45 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : path to executable is /usr/sbin/haproxy
Oct 10 06:20:45 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [WARNING]  (247941) : Exiting Master process...
Oct 10 06:20:45 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [WARNING]  (247941) : Exiting Master process...
Oct 10 06:20:45 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [ALERT]    (247941) : Current worker (247943) exited with code 143 (Terminated)
Oct 10 06:20:45 np0005479822 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [WARNING]  (247941) : All workers exited. Exiting... (0)
Oct 10 06:20:45 np0005479822 systemd[1]: libpod-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6.scope: Deactivated successfully.
Oct 10 06:20:45 np0005479822 podman[248152]: 2025-10-10 10:20:45.35417891 +0000 UTC m=+0.048030284 container died c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.361 2 INFO nova.virt.libvirt.driver [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance destroyed successfully.#033[00m
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.362 2 DEBUG nova.objects.instance [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 10 06:20:45 np0005479822 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6-userdata-shm.mount: Deactivated successfully.
Oct 10 06:20:45 np0005479822 systemd[1]: var-lib-containers-storage-overlay-a6f376d2e93cbfc6e1e776215b96aff2cab5e98ad5b3c1c6a6ebb79786c35344-merged.mount: Deactivated successfully.
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.388 2 DEBUG nova.virt.libvirt.vif [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:20:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292020392',display_name='tempest-TestNetworkBasicOps-server-1292020392',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292020392',id=12,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAT+YikYTBA+gJ7uw6swAHe8UXJlrkdRsMPU0KwiyyFauWaLZUlwpDJtpNi3JcVUbWLYjO0HRPwQgIDxYUsNqQN2uz9WldWafuvChAH95C9TEkm8Ni1fVouqScJtHFj6Ww==',key_name='tempest-TestNetworkBasicOps-218277133',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:20:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-3xmgv6ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:20:13Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.389 2 DEBUG nova.network.os_vif_util [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.390 2 DEBUG nova.network.os_vif_util [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.391 2 DEBUG os_vif [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7538303-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 06:20:45 np0005479822 podman[248152]: 2025-10-10 10:20:45.397013401 +0000 UTC m=+0.090864715 container cleanup c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.399 2 INFO os_vif [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30')
Oct 10 06:20:45 np0005479822 systemd[1]: libpod-conmon-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6.scope: Deactivated successfully.
Oct 10 06:20:45 np0005479822 podman[248197]: 2025-10-10 10:20:45.470599723 +0000 UTC m=+0.048118686 container remove c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.479 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6292a6a4-2b17-43a6-8b98-64c96fb7abc3]: (4, ('Fri Oct 10 10:20:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 (c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6)\nc7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6\nFri Oct 10 10:20:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 (c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6)\nc7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.482 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[37d2d6fc-fdfc-4d09-9d96-8ae2217d16c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.484 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3e50c5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:45 np0005479822 kernel: tapfb3e50c5-f0: left promiscuous mode
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.504 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf28aaf-ccef-4881-9168-6dc930b2a110]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.529 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4018ea31-e305-42c4-b555-a5aa86b59d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.531 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1516dc31-a540-4579-a94c-b34dd3729b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.552 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[59272119-2324-4e5e-b686-5b63e35b923b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452634, 'reachable_time': 15143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248224, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.555 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 06:20:45 np0005479822 systemd[1]: run-netns-ovnmeta\x2dfb3e50c5\x2dfe48\x2d4113\x2d87d7\x2d4e11945ac752.mount: Deactivated successfully.
Oct 10 06:20:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.555 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3e8861-4ade-4ff1-8ceb-95627cd874e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.581 2 DEBUG nova.compute.manager [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-unplugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.582 2 DEBUG oslo_concurrency.lockutils [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.582 2 DEBUG oslo_concurrency.lockutils [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.583 2 DEBUG oslo_concurrency.lockutils [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.583 2 DEBUG nova.compute.manager [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] No waiting events found dispatching network-vif-unplugged-d7538303-305d-4e01-9d26-cff58aec5656 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.584 2 DEBUG nova.compute.manager [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-unplugged-d7538303-305d-4e01-9d26-cff58aec5656 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.776 2 DEBUG nova.compute.manager [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-changed-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.777 2 DEBUG nova.compute.manager [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing instance network info cache due to event network-changed-d7538303-305d-4e01-9d26-cff58aec5656. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.778 2 DEBUG oslo_concurrency.lockutils [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.779 2 DEBUG oslo_concurrency.lockutils [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.779 2 DEBUG nova.network.neutron [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing network info cache for port d7538303-305d-4e01-9d26-cff58aec5656 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.825 2 INFO nova.virt.libvirt.driver [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deleting instance files /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_del
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.827 2 INFO nova.virt.libvirt.driver [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deletion of /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_del complete
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.884 2 INFO nova.compute.manager [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.885 2 DEBUG oslo.service.loopingcall [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.886 2 DEBUG nova.compute.manager [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 06:20:45 np0005479822 nova_compute[235132]: 2025-10-10 10:20:45.886 2 DEBUG nova.network.neutron [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 06:20:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.218 2 DEBUG nova.network.neutron [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.240 2 INFO nova.compute.manager [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 1.35 seconds to deallocate network for instance.
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.289 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.290 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.346 2 DEBUG oslo_concurrency.processutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:20:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.710 2 DEBUG nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.711 2 DEBUG oslo_concurrency.lockutils [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.712 2 DEBUG oslo_concurrency.lockutils [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.712 2 DEBUG oslo_concurrency.lockutils [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.712 2 DEBUG nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] No waiting events found dispatching network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.713 2 WARNING nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received unexpected event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 for instance with vm_state deleted and task_state None.
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.713 2 DEBUG nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-deleted-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 06:20:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:20:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2924327785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.818 2 DEBUG oslo_concurrency.processutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.825 2 DEBUG nova.compute.provider_tree [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.842 2 DEBUG nova.scheduler.client.report [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.865 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.892 2 INFO nova.scheduler.client.report [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3
Oct 10 06:20:47 np0005479822 nova_compute[235132]: 2025-10-10 10:20:47.966 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:20:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:48 np0005479822 nova_compute[235132]: 2025-10-10 10:20:48.205 2 DEBUG nova.network.neutron [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updated VIF entry in instance network info cache for port d7538303-305d-4e01-9d26-cff58aec5656. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 06:20:48 np0005479822 nova_compute[235132]: 2025-10-10 10:20:48.206 2 DEBUG nova.network.neutron [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 06:20:48 np0005479822 nova_compute[235132]: 2025-10-10 10:20:48.230 2 DEBUG oslo_concurrency.lockutils [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 06:20:48 np0005479822 nova_compute[235132]: 2025-10-10 10:20:48.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:20:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:20:50 np0005479822 nova_compute[235132]: 2025-10-10 10:20:50.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:52 np0005479822 podman[248251]: 2025-10-10 10:20:52.993734853 +0000 UTC m=+0.092925722 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 06:20:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:53 np0005479822 nova_compute[235132]: 2025-10-10 10:20:53.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:54.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:55 np0005479822 nova_compute[235132]: 2025-10-10 10:20:55.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:20:57 np0005479822 nova_compute[235132]: 2025-10-10 10:20:57.471 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:20:57 np0005479822 nova_compute[235132]: 2025-10-10 10:20:57.472 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:20:57 np0005479822 nova_compute[235132]: 2025-10-10 10:20:57.472 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 06:20:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:20:57 np0005479822 nova_compute[235132]: 2025-10-10 10:20:57.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:20:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:20:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:20:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:20:58 np0005479822 nova_compute[235132]: 2025-10-10 10:20:58.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:58 np0005479822 nova_compute[235132]: 2025-10-10 10:20:58.041 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:20:58 np0005479822 nova_compute[235132]: 2025-10-10 10:20:58.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:20:58 np0005479822 nova_compute[235132]: 2025-10-10 10:20:58.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:20:58 np0005479822 nova_compute[235132]: 2025-10-10 10:20:58.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:20:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:20:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:20:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:20:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:20:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:59.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.063 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.064 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091645.3541813, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.355 2 INFO nova.compute.manager [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Stopped (Lifecycle Event)
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.391 2 DEBUG nova.compute.manager [None req-d491c3e9-5712-4726-b2bf-4a5472c97a1e - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 06:21:00 np0005479822 nova_compute[235132]: 2025-10-10 10:21:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:21:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:00.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:01 np0005479822 nova_compute[235132]: 2025-10-10 10:21:01.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:21:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:02 np0005479822 nova_compute[235132]: 2025-10-10 10:21:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:21:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:02.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:03 np0005479822 nova_compute[235132]: 2025-10-10 10:21:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:21:03 np0005479822 nova_compute[235132]: 2025-10-10 10:21:03.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 06:21:03 np0005479822 nova_compute[235132]: 2025-10-10 10:21:03.074 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 06:21:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:03.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:03 np0005479822 nova_compute[235132]: 2025-10-10 10:21:03.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.075 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.103 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.104 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.104 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.104 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.105 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:21:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:21:04 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3705335326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.667 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.838 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.840 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4936MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.840 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.840 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.935 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:21:04 np0005479822 nova_compute[235132]: 2025-10-10 10:21:04.935 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.002 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.096 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.097 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.119 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 06:21:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.155 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.178 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:21:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/127293249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.638 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.646 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.675 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.699 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.700 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:21:05 np0005479822 nova_compute[235132]: 2025-10-10 10:21:05.701 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:21:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:06.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:06 np0005479822 podman[248349]: 2025-10-10 10:21:06.965991467 +0000 UTC m=+0.070626192 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 06:21:06 np0005479822 podman[248350]: 2025-10-10 10:21:06.980301989 +0000 UTC m=+0.076520444 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 06:21:07 np0005479822 podman[248351]: 2025-10-10 10:21:07.002675641 +0000 UTC m=+0.103789390 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:21:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:07.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:08 np0005479822 nova_compute[235132]: 2025-10-10 10:21:08.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:08.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:10 np0005479822 nova_compute[235132]: 2025-10-10 10:21:10.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:13.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:13 np0005479822 nova_compute[235132]: 2025-10-10 10:21:13.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:15 np0005479822 nova_compute[235132]: 2025-10-10 10:21:15.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:16.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:17.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:18 np0005479822 nova_compute[235132]: 2025-10-10 10:21:18.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:20 np0005479822 nova_compute[235132]: 2025-10-10 10:21:20.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:22 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:21:22 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:21:22 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:21:22 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:21:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:22.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:23 np0005479822 nova_compute[235132]: 2025-10-10 10:21:23.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:23 np0005479822 podman[248528]: 2025-10-10 10:21:23.998075193 +0000 UTC m=+0.092107850 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:21:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:24.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:25 np0005479822 nova_compute[235132]: 2025-10-10 10:21:25.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:27 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:21:27 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:21:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:28 np0005479822 nova_compute[235132]: 2025-10-10 10:21:28.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:28.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:30 np0005479822 nova_compute[235132]: 2025-10-10 10:21:30.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:31.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:33.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:33 np0005479822 nova_compute[235132]: 2025-10-10 10:21:33.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.036547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694036608, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2363, "num_deletes": 251, "total_data_size": 6358855, "memory_usage": 6441800, "flush_reason": "Manual Compaction"}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694060426, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4092731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31199, "largest_seqno": 33557, "table_properties": {"data_size": 4083132, "index_size": 6029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20048, "raw_average_key_size": 20, "raw_value_size": 4063987, "raw_average_value_size": 4155, "num_data_blocks": 259, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091492, "oldest_key_time": 1760091492, "file_creation_time": 1760091694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 24106 microseconds, and 16293 cpu microseconds.
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.060643) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4092731 bytes OK
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.060716) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062511) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062530) EVENT_LOG_v1 {"time_micros": 1760091694062524, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6348327, prev total WAL file size 6348327, number of live WAL files 2.
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.064996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3996KB)], [60(11MB)]
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694065089, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16141638, "oldest_snapshot_seqno": -1}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6211 keys, 14033198 bytes, temperature: kUnknown
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694146245, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14033198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992624, "index_size": 23952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 159069, "raw_average_key_size": 25, "raw_value_size": 13881683, "raw_average_value_size": 2235, "num_data_blocks": 964, "num_entries": 6211, "num_filter_entries": 6211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.146813) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14033198 bytes
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.148386) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.3 rd, 172.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.5 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 6732, records dropped: 521 output_compression: NoCompression
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.148420) EVENT_LOG_v1 {"time_micros": 1760091694148404, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81388, "compaction_time_cpu_micros": 52495, "output_level": 6, "num_output_files": 1, "total_output_size": 14033198, "num_input_records": 6732, "num_output_records": 6211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694150764, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694155677, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.064842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:21:34 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:21:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:34.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:35 np0005479822 ovn_controller[131749]: 2025-10-10T10:21:35Z|00096|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 10 06:21:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:35.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:35 np0005479822 nova_compute[235132]: 2025-10-10 10:21:35.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:37.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:37 np0005479822 podman[248586]: 2025-10-10 10:21:37.994314651 +0000 UTC m=+0.084370937 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 10 06:21:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:38 np0005479822 podman[248587]: 2025-10-10 10:21:38.010498584 +0000 UTC m=+0.093003213 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 10 06:21:38 np0005479822 podman[248588]: 2025-10-10 10:21:38.024903488 +0000 UTC m=+0.113967607 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:21:38 np0005479822 nova_compute[235132]: 2025-10-10 10:21:38.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:39.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:40 np0005479822 nova_compute[235132]: 2025-10-10 10:21:40.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:21:42.220 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:21:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:21:42.221 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:21:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:21:42.221 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:21:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:42.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:43 np0005479822 nova_compute[235132]: 2025-10-10 10:21:43.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000081s ======
Oct 10 06:21:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:44.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Oct 10 06:21:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:45.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:45 np0005479822 nova_compute[235132]: 2025-10-10 10:21:45.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:45 np0005479822 nova_compute[235132]: 2025-10-10 10:21:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:21:45.836 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:21:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:21:45.837 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:21:45 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:21:45.839 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:21:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:47.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:48 np0005479822 nova_compute[235132]: 2025-10-10 10:21:48.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:49.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:50 np0005479822 nova_compute[235132]: 2025-10-10 10:21:50.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:53 np0005479822 nova_compute[235132]: 2025-10-10 10:21:53.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:54.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:54 np0005479822 podman[248689]: 2025-10-10 10:21:54.976900224 +0000 UTC m=+0.068162605 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:21:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:55 np0005479822 nova_compute[235132]: 2025-10-10 10:21:55.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:21:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:21:57 np0005479822 nova_compute[235132]: 2025-10-10 10:21:57.686 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:21:57 np0005479822 nova_compute[235132]: 2025-10-10 10:21:57.687 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:21:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:21:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:21:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:21:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:21:58 np0005479822 nova_compute[235132]: 2025-10-10 10:21:58.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:21:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:21:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:21:59 np0005479822 nova_compute[235132]: 2025-10-10 10:21:59.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:21:59 np0005479822 nova_compute[235132]: 2025-10-10 10:21:59.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:21:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:21:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:21:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:00 np0005479822 nova_compute[235132]: 2025-10-10 10:22:00.041 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:00 np0005479822 nova_compute[235132]: 2025-10-10 10:22:00.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:00 np0005479822 nova_compute[235132]: 2025-10-10 10:22:00.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:01 np0005479822 nova_compute[235132]: 2025-10-10 10:22:01.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:01 np0005479822 nova_compute[235132]: 2025-10-10 10:22:01.069 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:01 np0005479822 nova_compute[235132]: 2025-10-10 10:22:01.069 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:22:01 np0005479822 nova_compute[235132]: 2025-10-10 10:22:01.070 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:22:01 np0005479822 nova_compute[235132]: 2025-10-10 10:22:01.095 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:22:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:02 np0005479822 nova_compute[235132]: 2025-10-10 10:22:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:22:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:22:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:03 np0005479822 nova_compute[235132]: 2025-10-10 10:22:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:03.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:03 np0005479822 nova_compute[235132]: 2025-10-10 10:22:03.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:05.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:05 np0005479822 nova_compute[235132]: 2025-10-10 10:22:05.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.097 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.098 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.099 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.099 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.100 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:22:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:22:06 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1956938146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.603 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:22:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:06.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.792 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.794 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4907MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.794 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.794 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.869 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.869 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:22:06 np0005479822 nova_compute[235132]: 2025-10-10 10:22:06.886 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:22:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:22:07 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3246728595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:22:07 np0005479822 nova_compute[235132]: 2025-10-10 10:22:07.349 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:22:07 np0005479822 nova_compute[235132]: 2025-10-10 10:22:07.355 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:22:07 np0005479822 nova_compute[235132]: 2025-10-10 10:22:07.371 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:22:07 np0005479822 nova_compute[235132]: 2025-10-10 10:22:07.373 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:22:07 np0005479822 nova_compute[235132]: 2025-10-10 10:22:07.373 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:22:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:08 np0005479822 nova_compute[235132]: 2025-10-10 10:22:08.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:09 np0005479822 podman[248785]: 2025-10-10 10:22:09.01156206 +0000 UTC m=+0.103646985 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 10 06:22:09 np0005479822 podman[248786]: 2025-10-10 10:22:09.014384407 +0000 UTC m=+0.098858035 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 10 06:22:09 np0005479822 podman[248787]: 2025-10-10 10:22:09.065292239 +0000 UTC m=+0.146288781 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 06:22:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:10 np0005479822 nova_compute[235132]: 2025-10-10 10:22:10.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:12 np0005479822 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 06:22:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:13 np0005479822 nova_compute[235132]: 2025-10-10 10:22:13.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:15 np0005479822 nova_compute[235132]: 2025-10-10 10:22:15.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:18 np0005479822 nova_compute[235132]: 2025-10-10 10:22:18.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:18.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:19.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:20 np0005479822 nova_compute[235132]: 2025-10-10 10:22:20.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:23 np0005479822 nova_compute[235132]: 2025-10-10 10:22:23.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:24.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:25 np0005479822 nova_compute[235132]: 2025-10-10 10:22:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:25 np0005479822 podman[248883]: 2025-10-10 10:22:25.970262248 +0000 UTC m=+0.070078757 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:22:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:28 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:22:28 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:22:28 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:22:28 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:22:28 np0005479822 nova_compute[235132]: 2025-10-10 10:22:28.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:29.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.461045) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749461090, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 773, "num_deletes": 250, "total_data_size": 1452613, "memory_usage": 1476256, "flush_reason": "Manual Compaction"}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749471472, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 956064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33562, "largest_seqno": 34330, "table_properties": {"data_size": 952472, "index_size": 1436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7475, "raw_average_key_size": 17, "raw_value_size": 945175, "raw_average_value_size": 2172, "num_data_blocks": 64, "num_entries": 435, "num_filter_entries": 435, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091695, "oldest_key_time": 1760091695, "file_creation_time": 1760091749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10459 microseconds, and 6351 cpu microseconds.
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.471507) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 956064 bytes OK
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.471524) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.472587) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.472597) EVENT_LOG_v1 {"time_micros": 1760091749472593, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.472608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1448524, prev total WAL file size 1448524, number of live WAL files 2.
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.473124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(933KB)], [63(13MB)]
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749473189, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 14989262, "oldest_snapshot_seqno": -1}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6134 keys, 13815009 bytes, temperature: kUnknown
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749549702, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 13815009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13774792, "index_size": 23787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 159179, "raw_average_key_size": 25, "raw_value_size": 13664907, "raw_average_value_size": 2227, "num_data_blocks": 942, "num_entries": 6134, "num_filter_entries": 6134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.550045) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 13815009 bytes
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.553736) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.6 rd, 180.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(30.1) write-amplify(14.4) OK, records in: 6646, records dropped: 512 output_compression: NoCompression
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.553796) EVENT_LOG_v1 {"time_micros": 1760091749553779, "job": 38, "event": "compaction_finished", "compaction_time_micros": 76630, "compaction_time_cpu_micros": 54152, "output_level": 6, "num_output_files": 1, "total_output_size": 13815009, "num_input_records": 6646, "num_output_records": 6134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749555047, "job": 38, "event": "table_file_deletion", "file_number": 65}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749560015, "job": 38, "event": "table_file_deletion", "file_number": 63}
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.473009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:22:29 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:22:30 np0005479822 nova_compute[235132]: 2025-10-10 10:22:30.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:31.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:22:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:22:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:33 np0005479822 nova_compute[235132]: 2025-10-10 10:22:33.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:34.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:35.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:35 np0005479822 nova_compute[235132]: 2025-10-10 10:22:35.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:36.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:37.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:38 np0005479822 nova_compute[235132]: 2025-10-10 10:22:38.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:38.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:39 np0005479822 podman[249016]: 2025-10-10 10:22:39.98923917 +0000 UTC m=+0.078599000 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 10 06:22:40 np0005479822 podman[249017]: 2025-10-10 10:22:40.022299283 +0000 UTC m=+0.109210247 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:22:40 np0005479822 podman[249015]: 2025-10-10 10:22:40.02439262 +0000 UTC m=+0.112761304 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:22:40 np0005479822 nova_compute[235132]: 2025-10-10 10:22:40.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:22:42.222 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:22:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:22:42.222 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:22:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:22:42.223 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:22:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:43.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:43 np0005479822 nova_compute[235132]: 2025-10-10 10:22:43.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:43 np0005479822 systemd-logind[789]: New session 57 of user zuul.
Oct 10 06:22:43 np0005479822 systemd[1]: Started Session 57 of User zuul.
Oct 10 06:22:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:44.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:45.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:45 np0005479822 nova_compute[235132]: 2025-10-10 10:22:45.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:46.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 06:22:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1845402693' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 06:22:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:48 np0005479822 nova_compute[235132]: 2025-10-10 10:22:48.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:22:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:48.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:22:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:50 np0005479822 nova_compute[235132]: 2025-10-10 10:22:50.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:50.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:22:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:52.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:22:52 np0005479822 ovs-vsctl[249481]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 10 06:22:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:22:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:53.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:22:53 np0005479822 nova_compute[235132]: 2025-10-10 10:22:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:54 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 10 06:22:54 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 10 06:22:54 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 06:22:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:54 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: cache status {prefix=cache status} (starting...)
Oct 10 06:22:54 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:54 np0005479822 lvm[249791]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 06:22:54 np0005479822 lvm[249791]: VG ceph_vg0 finished
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: client ls {prefix=client ls} (starting...)
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:55 np0005479822 kernel: block loop3: the capability attribute has been deprecated.
Oct 10 06:22:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:22:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:55.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:55 np0005479822 nova_compute[235132]: 2025-10-10 10:22:55.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 06:22:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/588890842' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 10 06:22:55 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 06:22:56 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2495586288' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:56 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 10 06:22:56 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037339908' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 10 06:22:56 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:56 np0005479822 podman[250116]: 2025-10-10 10:22:56.964157101 +0000 UTC m=+0.070492018 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2477553374' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 06:22:57 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: ops {prefix=ops} (starting...)
Oct 10 06:22:57 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2039649700' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 06:22:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 06:22:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2988512648' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 06:22:57 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: session ls {prefix=session ls} (starting...)
Oct 10 06:22:57 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:22:57 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: status {prefix=status} (starting...)
Oct 10 06:22:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:22:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:22:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:22:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3421568049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 06:22:58 np0005479822 nova_compute[235132]: 2025-10-10 10:22:58.374 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:58 np0005479822 nova_compute[235132]: 2025-10-10 10:22:58.375 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1035999956' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 06:22:58 np0005479822 nova_compute[235132]: 2025-10-10 10:22:58.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1488282981' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 06:22:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:58.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3421045328' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 06:22:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3233252603' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 06:22:59 np0005479822 nova_compute[235132]: 2025-10-10 10:22:59.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:22:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:22:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:22:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:22:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 10 06:22:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3055579754' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 06:22:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 10 06:22:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3635718541' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 06:22:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 06:22:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/571390437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 06:23:00 np0005479822 nova_compute[235132]: 2025-10-10 10:23:00.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 10 06:23:00 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/175764086' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 06:23:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 06:23:00 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2561366827' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 06:23:00 np0005479822 nova_compute[235132]: 2025-10-10 10:23:00.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:23:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:23:01 np0005479822 nova_compute[235132]: 2025-10-10 10:23:01.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 06:23:01 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1569461498' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 06:23:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:01 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 06:23:01 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579803735' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 4964352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 4964352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b09723da40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0988ae1e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984329 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 4956160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 4939776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 4939776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 4931584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 4931584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984329 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 4923392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:23:01 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 4923392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4915200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4915200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4915200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984329 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 4907008 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.927526474s of 13.931305885s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 4898816 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4890624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4890624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4890624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984461 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 4882432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 4882432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 4874240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 4866048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 4866048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985382 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 4866048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 4841472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 4841472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 4833280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 4833280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.529447556s of 14.540608406s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 4833280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 4825088 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 4825088 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4816896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b0987bf860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4816896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4816896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4800512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4800512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4792320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4792320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 4784128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 4784128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 4784128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 4775936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 4775936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.334420204s of 15.342825890s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 4767744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 4767744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 4767744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4759552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4759552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986894 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4759552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 4751360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 4751360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4743168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4743168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987815 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4743168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 4734976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 4734976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4726784 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4726784 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987815 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.439384460s of 15.455260277s, submitted: 4
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 4702208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 4702208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4694016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4694016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 4685824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 4685824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4677632 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4677632 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4677632 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4669440 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4669440 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4661248 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4661248 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4653056 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4653056 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4653056 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 4644864 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 4644864 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4636672 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4636672 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4636672 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4628480 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4628480 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4628480 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 4620288 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 4620288 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 4612096 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 4612096 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 4603904 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 4603904 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 4603904 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 4595712 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 4595712 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4587520 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4587520 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4579328 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4579328 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4579328 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4571136 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4571136 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4571136 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 4562944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4554752 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4554752 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4554752 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 4546560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 4546560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 4546560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 4538368 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 4538368 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 4530176 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 4530176 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0991df680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b09722ed20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4513792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4513792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 4505600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 4505600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4497408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4497408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 4489216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 4489216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 4489216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 70.248580933s of 70.251838684s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4481024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4481024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4472832 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4472832 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987815 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4472832 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 4464640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 4464640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 4456448 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 4456448 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987224 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 4448256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 4448256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4440064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.062088966s of 12.070409775s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4440064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4440064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4431872 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4431872 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4423680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4423680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4415488 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4415488 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4415488 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4407296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4407296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 4399104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 4399104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4390912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4390912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4390912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4382720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4382720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4374528 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4374528 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4366336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4366336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4366336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4358144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4358144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4358144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 4349952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 4349952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4341760 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4341760 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4333568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4333568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4317184 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4317184 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4308992 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4308992 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4300800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4300800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4292608 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4292608 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4292608 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4284416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09722f0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b09840c1e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4284416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4276224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4276224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4276224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4268032 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4268032 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4268032 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4259840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4259840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4251648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4251648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.489089966s of 57.500850677s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa4000 session 0x55b096d5fa40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988cde00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986633 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4243456 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4235264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4235264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4227072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4227072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988145 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4218880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4218880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4218880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4210688 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4210688 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988145 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4202496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029865265s of 11.068701744s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4186112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4177920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4177920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988277 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4136960 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4136960 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4128768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989657 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4128768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4120576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4120576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.139680862s of 12.150322914s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4120576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4112384 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4177920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4161536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4161536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4161536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4153344 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 21.40 MB, 0.04 MB/s
Interval WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4096000 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4096000 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4087808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4087808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4079616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4079616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4071424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4071424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4063232 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4063232 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4063232 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4055040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4055040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4046848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4046848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4046848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4038656 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4038656 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4030464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4030464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4030464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4022272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4022272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4014080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4014080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4014080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4005888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4005888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3997696 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3997696 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3989504 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3989504 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3981312 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3981312 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3981312 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3973120 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3973120 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3973120 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 3964928 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 3964928 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 3956736 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 3956736 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 3948544 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 3948544 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 3948544 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 3940352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 3940352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 3932160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 3932160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 3932160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 3923968 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 3923968 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3915776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3915776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3907584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3907584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3907584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3899392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3899392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 3891200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 3891200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 3891200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b0988aed20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3883008 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3883008 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3874816 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3874816 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3866624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3866624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3858432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3858432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3858432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3850240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.893539429s of 81.902740479s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3850240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989066 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3850240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3842048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3842048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3833856 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3833856 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990578 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3825664 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3825664 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3825664 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3817472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3817472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989987 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3809280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3809280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3809280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3801088 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3792896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989987 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.926835060s of 15.939207077s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3784704 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3784704 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3776512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3776512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3768320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3768320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3760128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3760128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3760128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3751936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3751936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3751936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3743744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3743744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3735552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3735552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3735552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3727360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3727360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3719168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3719168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3710976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b098e263c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b096bfb0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3710976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3702784 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3694592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3694592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3686400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3686400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3678208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3678208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3678208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3670016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.095127106s of 32.099720001s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3670016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3670016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989987 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991499 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [1,1])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3620864 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.123404503s of 10.003307343s, submitted: 325
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3497984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990908 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990776 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b0986803c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa4400 session 0x55b0987bd680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990776 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990776 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.957939148s of 19.074586868s, submitted: 41
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990908 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990908 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990317 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.904156685s of 15.913485527s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b098fe1a40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 166.030792236s of 166.034805298s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0987bf4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b096d5eb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990317 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991829 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.242959023s of 11.250681877s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991961 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993473 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993341 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104929924s of 12.169629097s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096657800 session 0x55b099008780
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b09900a1e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b096c4b0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b097a14f00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.395177841s of 19.406061172s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992882 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.400293350s of 13.415930748s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b09900ab40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.019653320s of 10.023086548s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.912096024s of 12.919400215s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dcbc00 session 0x55b09586e3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09900b680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b09900b0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.915779114s of 35.922908783s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995774 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083123207s of 12.089940071s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0990090e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.845153809s of 41.853366852s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b098269000 session 0x55b097a15e00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b0987bf4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e19a40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.253569603s of 15.263068199s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3260416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.775625229s of 15.790586472s, submitted: 4
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988cda40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0987be5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.357236862s of 20.361238480s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000640 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000049 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.517070770s of 14.537599564s, submitted: 4
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905d2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b096bfb0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e285a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098e29860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.958301544s of 17.961801529s, submitted: 1
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000038s
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002614 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.060987473s of 12.085634232s, submitted: 5
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9078 writes, 35K keys, 9078 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9078 writes, 2064 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 776 writes, 1221 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
Interval WAL: 776 writes, 366 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002023 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3137536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988dd680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b097a69e00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.974418640s of 97.984451294s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001891 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006427 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005836 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.054930687s of 12.073850632s, submitted: 5
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b0994fde00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b09900ab40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.182666779s of 16.189365387s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b098fe4000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069c00 session 0x55b098f794a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2670592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.918289185s of 12.100981712s, submitted: 367
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b098fe1680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09905d0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905c780
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.894775391s of 18.905117035s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005575 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.683311462s of 10.696245193s, submitted: 3
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005707 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.102847099s of 12.118186951s, submitted: 4
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b0986805a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09840d2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.958114624s of 71.965682983s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.132945061s of 12.140886307s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b098f8a3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0988dd680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.335838318s of 16.343191147s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2465792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.721952438s of 15.729380608s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069400 session 0x55b098fe0f00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098f9fc20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011972 data_alloc: 218103808 data_used: 282624
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18112512 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 145 ms_handle_reset con 0x55b099069000 session 0x55b0988aeb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18096128 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 15982592 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069400 session 0x55b098f8be00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.096752167s of 14.256991386s, submitted: 46
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083035469s of 12.092510223s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080052 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b09900bc20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069800 session 0x55b0990083c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b098e26000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026000 session 0x55b099433680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026400 session 0x55b09722fa40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099027400 session 0x55b098856b40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069000 session 0x55b096c4a3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.802989960s of 21.816146851s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104862 data_alloc: 218103808 data_used: 7106560
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026000 session 0x55b098fe0960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 11378688 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026400 session 0x55b0982841e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026800 session 0x55b098e19860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099027400 session 0x55b0988cd860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099069000 session 0x55b098f8b2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdd3000/0x0/0x4ffc00000, data 0x974379/0xa38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 11255808 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb81c000/0x0/0x4ffc00000, data 0xf294c4/0xfee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155058 data_alloc: 218103808 data_used: 7106560
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096e1c960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156283 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 9748480 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188963 data_alloc: 218103808 data_used: 7876608
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.784969330s of 17.929061890s, submitted: 52
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188372 data_alloc: 218103808 data_used: 7876608
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 4276224 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 4218880 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217686 data_alloc: 218103808 data_used: 8949760
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 218103808 data_used: 8953856
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b0991de960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 2875392 heap: 107372544 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.675640106s of 26.806079865s, submitted: 44
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b098fe1a40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988ae5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068400 session 0x55b09722e960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096bfb860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b099432960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe0b40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098e28960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b0988af680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b099433860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988dcb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.327645302s of 10.456887245s, submitted: 32
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295076 data_alloc: 218103808 data_used: 8974336
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 14508032 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361328 data_alloc: 234881024 data_used: 16445440
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364016 data_alloc: 234881024 data_used: 16445440
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515455246s of 12.531072617s, submitted: 5
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 8232960 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bfa000/0x0/0x4ffc00000, data 0x29a6496/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475602 data_alloc: 234881024 data_used: 17408000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 7258112 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bb4000/0x0/0x4ffc00000, data 0x29e3496/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466414 data_alloc: 234881024 data_used: 17408000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 8192000 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086705208s of 10.319118500s, submitted: 125
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b0988afa40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b09638f0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098f9f680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207454 data_alloc: 218103808 data_used: 5505024
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0990081e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905c5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 17301504 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09840d4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b097a68d20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987bfa40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0993723c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936c960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.987216949s of 32.222537994s, submitted: 83
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b096d5ed20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098681860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098e292c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0970df2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0970ded20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b0987bd2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905d860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09723cb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987be5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b098f9e1e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104292352 unmapped: 18956288 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 18948096 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153680 data_alloc: 218103808 data_used: 4112384
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.687290192s of 14.744665146s, submitted: 17
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154724 data_alloc: 218103808 data_used: 4239360
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159212 data_alloc: 218103808 data_used: 4243456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 16105472 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 16080896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 14680064 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213196 data_alloc: 218103808 data_used: 4591616
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.378688812s of 13.587653160s, submitted: 76
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098680960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6400 session 0x55b0994323c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7fe00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.899662018s of 20.906446457s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09723cd20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 18874368 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857a40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.475157738s of 10.608925819s, submitted: 41
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142193 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b096c4a5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b0988dcf00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.499574661s of 16.514310837s, submitted: 4
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b096c4af00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096c4a3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09936da40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936cf00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b0972781e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47d000/0x0/0x4ffc00000, data 0xd19496/0xddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173157 data_alloc: 218103808 data_used: 3641344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09874a780
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104554496 unmapped: 19750912 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 19668992 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.522539139s of 18.680767059s, submitted: 28
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 19152896 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109240320 unmapped: 15065088 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280927 data_alloc: 218103808 data_used: 7426048
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280795 data_alloc: 218103808 data_used: 7426048
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b098fe4000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098f78f00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7400 session 0x55b09936de00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097278d20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16244736 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b09936d4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 11780096 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.515605927s of 18.667829514s, submitted: 60
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0991dfc20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7800 session 0x55b098e281e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097a68d20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b0994325a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096ddd4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361043 data_alloc: 234881024 data_used: 10899456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0982843c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094000 session 0x55b096d112c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098284960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b096d7f2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 16498688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420911 data_alloc: 234881024 data_used: 19783680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423343 data_alloc: 234881024 data_used: 20115456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098fe4b40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b099433c20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.933946609s of 17.117507935s, submitted: 45
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 9232384 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454755 data_alloc: 234881024 data_used: 20537344
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464521 data_alloc: 234881024 data_used: 20365312
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b097a154a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0988cd0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.756012917s of 16.126758575s, submitted: 124
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0987be1e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09586ef00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09638e3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x9784b9/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 21069824 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3034 syncs, 3.72 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2207 writes, 6322 keys, 2207 commit groups, 1.0 writes per commit group, ingest: 6.08 MB, 0.01 MB/s#012Interval WAL: 2207 writes, 970 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166769 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096dddc20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099008f00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b099009a40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.991001129s of 27.170951843s, submitted: 56
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b097a15e00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096c4b860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096c4a5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4ba40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7f2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094400 session 0x55b0994fde00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.066671371s of 19.101375580s, submitted: 6
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 15245312 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 14950400 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 14770176 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.543247223s of 26.644886017s, submitted: 51
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099009680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f792c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098e29680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094c00 session 0x55b09874fe00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095000 session 0x55b096ddd680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332538 data_alloc: 234881024 data_used: 13742080
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fc5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339966 data_alloc: 234881024 data_used: 14827520
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340574 data_alloc: 234881024 data_used: 14888960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.697357178s of 16.766319275s, submitted: 23
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 12836864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388676 data_alloc: 234881024 data_used: 15142912
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 10887168 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9345000/0x0/0x4ffc00000, data 0x1e4a4a6/0x1f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393116 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.087865829s of 19.344846725s, submitted: 102
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.535310745s of 12.545021057s, submitted: 2
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0963a4800 session 0x55b099009860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 12345344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.840996742s of 10.006482124s, submitted: 55
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 12279808 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391892 data_alloc: 234881024 data_used: 14974976
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09905c3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0988ddc20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b0970df4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 13426688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0994fda40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445354462s of 14.472743034s, submitted: 373
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096ddc3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 17448960 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0987bc000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098681860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b098f794a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f783c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.050231934s of 27.103757858s, submitted: 15
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 24707072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638fe00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0994332c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b099433e00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8000 session 0x55b0994321e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09586ef00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9df6000/0x0/0x4ffc00000, data 0x139f4a6/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263995 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fcd20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 21086208 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336483 data_alloc: 234881024 data_used: 17555456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006184578s of 10.114780426s, submitted: 27
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096c4a3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a14b40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 19595264 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8400 session 0x55b097a69860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0982841e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096d5eb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dcd20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09638f0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.877738953s of 12.966034889s, submitted: 31
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8800 session 0x55b098f8bc20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096e1da40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098fe41e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 26058752 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0990092c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 26050560 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223186 data_alloc: 218103808 data_used: 7118848
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8c00 session 0x55b09723de00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857e00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638f4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936de00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: mgrc ms_handle_reset ms_handle_reset con 0x55b096656000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: mgrc handle_mgr_configure stats_period=5
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096dcac00 session 0x55b0988cc5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068000 session 0x55b0991deb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237429 data_alloc: 218103808 data_used: 8724480
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a15680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9000 session 0x55b096e1cd20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.847922325s of 13.942553520s, submitted: 33
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 26181632 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4af00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dd4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098f8b2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9800 session 0x55b097a68780
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.952396393s of 23.063278198s, submitted: 31
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0x9784bf/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0991df860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09723cb40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0986803c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9c00 session 0x55b098680000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 26148864 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098681860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098680b40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc1e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09874ab40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 26107904 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 26099712 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 25485312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114622464 unmapped: 24895488 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.799320221s of 20.981313705s, submitted: 25
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 20717568 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa1a5000/0x0/0x4ffc00000, data 0xfef508/0x10b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ec9000/0x0/0x4ffc00000, data 0x12cb508/0x1393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320655 data_alloc: 234881024 data_used: 13844480
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 19365888 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.374687195s of 14.491823196s, submitted: 54
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade000 session 0x55b09874a5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b0990094a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f794a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.981967926s of 26.068605423s, submitted: 26
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098284780
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe54a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09723d2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0970df680
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098f8a3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223317 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f8b4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226965 data_alloc: 218103808 data_used: 7639040
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237757 data_alloc: 218103808 data_used: 9252864
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.933888435s of 16.003017426s, submitted: 22
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19529728 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307679 data_alloc: 218103808 data_used: 9330688
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9c31000/0x0/0x4ffc00000, data 0x15644f8/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 21463040 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 21446656 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe4780
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936c3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936d860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09638f4a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09900ad20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341734 data_alloc: 234881024 data_used: 10129408
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe5a40
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 19619840 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 19513344 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348526 data_alloc: 234881024 data_used: 10899456
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.355663300s of 20.596637726s, submitted: 92
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x1aa64f8/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396772 data_alloc: 234881024 data_used: 12251136
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.976144791s of 18.175519943s, submitted: 91
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391364 data_alloc: 234881024 data_used: 12251136
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe41e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b0988cd0e0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 16351232 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0994fcf00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97ee000/0x0/0x4ffc00000, data 0x15974f8/0x165e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327065 data_alloc: 234881024 data_used: 10133504
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b098e29c20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4b860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.100059509s of 34.283664703s, submitted: 59
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09723cd20
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09723c960
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b096d7f2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b096d7e5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7e3c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313977 data_alloc: 218103808 data_used: 7114752
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a5a0
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b0988cd860
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 23453696 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314282 data_alloc: 218103808 data_used: 7118848
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 23429120 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 19677184 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.678905487s of 18.792392731s, submitted: 24
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 10878976 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497028 data_alloc: 234881024 data_used: 19230720
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,1,1])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496100 data_alloc: 234881024 data_used: 19238912
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c19000/0x0/0x4ffc00000, data 0x216d496/0x2233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b09905c000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.617673874s of 10.932350159s, submitted: 136
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b0994fde00
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098fe4000
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}'
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 20815872 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'config show' '{prefix=config show}'
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 20987904 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 21602304 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:23:01 np0005479822 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}'
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3971028922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 06:23:02 np0005479822 nova_compute[235132]: 2025-10-10 10:23:02.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:02 np0005479822 nova_compute[235132]: 2025-10-10 10:23:02.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:23:02 np0005479822 nova_compute[235132]: 2025-10-10 10:23:02.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:23:02 np0005479822 nova_compute[235132]: 2025-10-10 10:23:02.070 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:23:02 np0005479822 nova_compute[235132]: 2025-10-10 10:23:02.070 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/661113299' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:02.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 10 06:23:02 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3336206771' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 06:23:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:03.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:03 np0005479822 nova_compute[235132]: 2025-10-10 10:23:03.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:03 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 10 06:23:03 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3878076498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 06:23:04 np0005479822 nova_compute[235132]: 2025-10-10 10:23:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 10 06:23:04 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2189172015' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 06:23:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 10 06:23:04 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2057841537' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 06:23:04 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 10 06:23:04 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3927813275' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 06:23:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:04.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:05 np0005479822 nova_compute[235132]: 2025-10-10 10:23:05.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3007931880' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1496720933' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/166812341' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 06:23:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2734253997' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1178817591' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 06:23:05 np0005479822 nova_compute[235132]: 2025-10-10 10:23:05.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 06:23:05 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/574716574' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3456937404' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/496033978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 06:23:06 np0005479822 systemd[1]: Starting Hostname Service...
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2553354890' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 06:23:06 np0005479822 systemd[1]: Started Hostname Service.
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3152055571' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 06:23:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:06.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 10 06:23:06 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3212453945' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 06:23:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 10 06:23:07 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1152939535' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 06:23:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:07.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 10 06:23:07 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2749167785' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 06:23:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.067 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:08 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:23:08 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1010273267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.541 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.700 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.701 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4671MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.702 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.702 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.786 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.787 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:23:08 np0005479822 nova_compute[235132]: 2025-10-10 10:23:08.811 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:23:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:08 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 10 06:23:08 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3785517594' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 06:23:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:23:09 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1415916768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:23:09 np0005479822 nova_compute[235132]: 2025-10-10 10:23:09.287 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:23:09 np0005479822 nova_compute[235132]: 2025-10-10 10:23:09.292 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:23:09 np0005479822 nova_compute[235132]: 2025-10-10 10:23:09.330 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:23:09 np0005479822 nova_compute[235132]: 2025-10-10 10:23:09.333 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:23:09 np0005479822 nova_compute[235132]: 2025-10-10 10:23:09.334 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:23:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 10 06:23:09 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/892024631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 06:23:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 06:23:09 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3580709824' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/71675879' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 06:23:10 np0005479822 nova_compute[235132]: 2025-10-10 10:23:10.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 06:23:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 06:23:10 np0005479822 podman[252224]: 2025-10-10 10:23:10.999415146 +0000 UTC m=+0.084717677 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 10 06:23:11 np0005479822 podman[252223]: 2025-10-10 10:23:11.004163756 +0000 UTC m=+0.105734582 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 06:23:11 np0005479822 podman[252229]: 2025-10-10 10:23:11.028134681 +0000 UTC m=+0.105707901 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 06:23:11 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 06:23:11 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3592385461' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 06:23:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:11.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 06:23:12 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2970845557' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 06:23:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 10 06:23:12 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4049190113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 06:23:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:13 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 10 06:23:13 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2793981116' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 06:23:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:13.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:13 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 10 06:23:13 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223693866' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 06:23:13 np0005479822 nova_compute[235132]: 2025-10-10 10:23:13.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:14 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 10 06:23:14 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1333227027' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 06:23:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:15.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:15 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 10 06:23:15 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/693351407' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 06:23:15 np0005479822 nova_compute[235132]: 2025-10-10 10:23:15.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:16 np0005479822 ovs-appctl[253402]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 06:23:16 np0005479822 ovs-appctl[253407]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 06:23:16 np0005479822 ovs-appctl[253413]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 06:23:16 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 10 06:23:16 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1989748693' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 06:23:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 10 06:23:17 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1097788986' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 06:23:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:18 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 10 06:23:18 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1339899947' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 06:23:18 np0005479822 nova_compute[235132]: 2025-10-10 10:23:18.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:18 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 10 06:23:18 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/929438571' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 06:23:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:18.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:19 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 06:23:19 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1795034779' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 06:23:20 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct 10 06:23:20 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3929414897' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 06:23:20 np0005479822 nova_compute[235132]: 2025-10-10 10:23:20.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:20.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:20 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 10 06:23:20 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/151415482' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 06:23:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:21 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 06:23:21 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2054782978' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 06:23:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 10 06:23:22 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/833543587' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 06:23:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 10 06:23:22 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2593878606' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 06:23:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:23:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:22.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:23:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:23 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 10 06:23:23 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2474021729' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 06:23:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:23.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:23 np0005479822 nova_compute[235132]: 2025-10-10 10:23:23.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:23 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 10 06:23:23 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639796689' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 06:23:24 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 10 06:23:24 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281277725' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 06:23:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:24.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:25 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct 10 06:23:25 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3007322319' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 06:23:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:25.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:25 np0005479822 nova_compute[235132]: 2025-10-10 10:23:25.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2707461924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 10 06:23:26 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4075409164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 06:23:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:26.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:27 np0005479822 podman[255470]: 2025-10-10 10:23:27.098199612 +0000 UTC m=+0.091492642 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:23:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:27 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 06:23:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 06:23:27 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3733082418' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 06:23:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:28 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 10 06:23:28 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1570035963' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 06:23:28 np0005479822 nova_compute[235132]: 2025-10-10 10:23:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:28.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:29 np0005479822 systemd[1]: Starting Time & Date Service...
Oct 10 06:23:29 np0005479822 systemd[1]: Started Time & Date Service.
Oct 10 06:23:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:29 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 06:23:29 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2650376232' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 06:23:30 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 10 06:23:30 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/344713102' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 06:23:30 np0005479822 nova_compute[235132]: 2025-10-10 10:23:30.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:32.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:33 np0005479822 nova_compute[235132]: 2025-10-10 10:23:33.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:23:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:23:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:23:33 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:23:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:35 np0005479822 nova_compute[235132]: 2025-10-10 10:23:35.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:37.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:38 np0005479822 nova_compute[235132]: 2025-10-10 10:23:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:23:39 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:23:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:40 np0005479822 nova_compute[235132]: 2025-10-10 10:23:40.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:41 np0005479822 podman[256205]: 2025-10-10 10:23:41.463095863 +0000 UTC m=+0.067024423 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 06:23:41 np0005479822 podman[256206]: 2025-10-10 10:23:41.481429394 +0000 UTC m=+0.079605727 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 06:23:41 np0005479822 podman[256207]: 2025-10-10 10:23:41.51750712 +0000 UTC m=+0.107002826 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 10 06:23:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:23:42.223 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:23:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:23:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:23:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:23:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:23:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:42.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:43 np0005479822 nova_compute[235132]: 2025-10-10 10:23:43.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:45 np0005479822 nova_compute[235132]: 2025-10-10 10:23:45.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:46.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:48 np0005479822 nova_compute[235132]: 2025-10-10 10:23:48.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:48.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:49.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:50 np0005479822 nova_compute[235132]: 2025-10-10 10:23:50.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:51.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:52.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:23:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:23:53 np0005479822 nova_compute[235132]: 2025-10-10 10:23:53.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:54.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:23:55 np0005479822 nova_compute[235132]: 2025-10-10 10:23:55.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:56.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:57.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:23:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:23:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:23:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:23:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:23:58 np0005479822 podman[256276]: 2025-10-10 10:23:58.008245693 +0000 UTC m=+0.083606357 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 10 06:23:58 np0005479822 nova_compute[235132]: 2025-10-10 10:23:58.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:23:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:23:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:58.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:23:59 np0005479822 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 06:23:59 np0005479822 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 06:23:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:23:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:23:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:59.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:00 np0005479822 nova_compute[235132]: 2025-10-10 10:24:00.334 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:00 np0005479822 nova_compute[235132]: 2025-10-10 10:24:00.334 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:00 np0005479822 nova_compute[235132]: 2025-10-10 10:24:00.335 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:24:00 np0005479822 nova_compute[235132]: 2025-10-10 10:24:00.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:01 np0005479822 nova_compute[235132]: 2025-10-10 10:24:01.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:02 np0005479822 nova_compute[235132]: 2025-10-10 10:24:02.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:02 np0005479822 nova_compute[235132]: 2025-10-10 10:24:02.059 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:02 np0005479822 nova_compute[235132]: 2025-10-10 10:24:02.059 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:02.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:03 np0005479822 nova_compute[235132]: 2025-10-10 10:24:03.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:03 np0005479822 nova_compute[235132]: 2025-10-10 10:24:03.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:24:03 np0005479822 nova_compute[235132]: 2025-10-10 10:24:03.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:24:03 np0005479822 nova_compute[235132]: 2025-10-10 10:24:03.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:24:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:24:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:24:03 np0005479822 nova_compute[235132]: 2025-10-10 10:24:03.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:04 np0005479822 nova_compute[235132]: 2025-10-10 10:24:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:04.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:05.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:05 np0005479822 nova_compute[235132]: 2025-10-10 10:24:05.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:06 np0005479822 nova_compute[235132]: 2025-10-10 10:24:06.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:06.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:08 np0005479822 nova_compute[235132]: 2025-10-10 10:24:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:24:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:08.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.071 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.071 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:24:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:09 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:24:09 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1433281411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.574 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.773 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.781 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4702MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.781 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.782 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.864 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.865 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:24:09 np0005479822 nova_compute[235132]: 2025-10-10 10:24:09.881 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:24:10 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:24:10 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1672674889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:24:10 np0005479822 nova_compute[235132]: 2025-10-10 10:24:10.343 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:24:10 np0005479822 nova_compute[235132]: 2025-10-10 10:24:10.350 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:24:10 np0005479822 nova_compute[235132]: 2025-10-10 10:24:10.369 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:24:10 np0005479822 nova_compute[235132]: 2025-10-10 10:24:10.371 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:24:10 np0005479822 nova_compute[235132]: 2025-10-10 10:24:10.371 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:24:10 np0005479822 nova_compute[235132]: 2025-10-10 10:24:10.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:11.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:11 np0005479822 podman[256376]: 2025-10-10 10:24:11.571783517 +0000 UTC m=+0.072246725 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 06:24:11 np0005479822 podman[256377]: 2025-10-10 10:24:11.581431781 +0000 UTC m=+0.071167627 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 06:24:11 np0005479822 podman[256398]: 2025-10-10 10:24:11.65342983 +0000 UTC m=+0.105497085 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:24:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:12.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:13 np0005479822 nova_compute[235132]: 2025-10-10 10:24:13.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:14.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:24:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:24:15 np0005479822 nova_compute[235132]: 2025-10-10 10:24:15.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:16.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:17.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:18 np0005479822 nova_compute[235132]: 2025-10-10 10:24:18.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:18.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:19 np0005479822 systemd-logind[789]: Session 57 logged out. Waiting for processes to exit.
Oct 10 06:24:19 np0005479822 systemd[1]: session-57.scope: Deactivated successfully.
Oct 10 06:24:19 np0005479822 systemd[1]: session-57.scope: Consumed 2min 52.246s CPU time, 650.3M memory peak, read 184.1M from disk, written 128.2M to disk.
Oct 10 06:24:19 np0005479822 systemd-logind[789]: Removed session 57.
Oct 10 06:24:19 np0005479822 systemd-logind[789]: New session 58 of user zuul.
Oct 10 06:24:19 np0005479822 systemd[1]: Started Session 58 of User zuul.
Oct 10 06:24:20 np0005479822 systemd[1]: session-58.scope: Deactivated successfully.
Oct 10 06:24:20 np0005479822 systemd-logind[789]: Session 58 logged out. Waiting for processes to exit.
Oct 10 06:24:20 np0005479822 systemd-logind[789]: Removed session 58.
Oct 10 06:24:20 np0005479822 systemd-logind[789]: New session 59 of user zuul.
Oct 10 06:24:20 np0005479822 systemd[1]: Started Session 59 of User zuul.
Oct 10 06:24:20 np0005479822 systemd[1]: session-59.scope: Deactivated successfully.
Oct 10 06:24:20 np0005479822 systemd-logind[789]: Session 59 logged out. Waiting for processes to exit.
Oct 10 06:24:20 np0005479822 systemd-logind[789]: Removed session 59.
Oct 10 06:24:20 np0005479822 nova_compute[235132]: 2025-10-10 10:24:20.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:20.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:22.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:23 np0005479822 nova_compute[235132]: 2025-10-10 10:24:23.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:25.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:25 np0005479822 nova_compute[235132]: 2025-10-10 10:24:25.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:26.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:28 np0005479822 nova_compute[235132]: 2025-10-10 10:24:28.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:28.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:28 np0005479822 podman[256530]: 2025-10-10 10:24:28.962510593 +0000 UTC m=+0.063785894 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:24:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:29.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:30 np0005479822 nova_compute[235132]: 2025-10-10 10:24:30.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:30.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:24:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:32.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:24:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:33 np0005479822 nova_compute[235132]: 2025-10-10 10:24:33.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:35.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:35 np0005479822 nova_compute[235132]: 2025-10-10 10:24:35.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:37.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:38 np0005479822 nova_compute[235132]: 2025-10-10 10:24:38.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:24:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:24:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:39.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:40 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:40 np0005479822 nova_compute[235132]: 2025-10-10 10:24:40.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:40.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:24:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:41 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:24:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:41.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:41 np0005479822 podman[256733]: 2025-10-10 10:24:41.785617357 +0000 UTC m=+0.065176582 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 06:24:41 np0005479822 podman[256732]: 2025-10-10 10:24:41.814314712 +0000 UTC m=+0.092499260 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:24:41 np0005479822 podman[256734]: 2025-10-10 10:24:41.814287491 +0000 UTC m=+0.091461402 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:24:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:24:42.223 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:24:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:24:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:24:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:24:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:24:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:42.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:43.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:43 np0005479822 nova_compute[235132]: 2025-10-10 10:24:43.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:44.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:24:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:45.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:45 np0005479822 nova_compute[235132]: 2025-10-10 10:24:45.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:48 np0005479822 nova_compute[235132]: 2025-10-10 10:24:48.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:50 np0005479822 nova_compute[235132]: 2025-10-10 10:24:50.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:50.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:51.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:53.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:53 np0005479822 nova_compute[235132]: 2025-10-10 10:24:53.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.201685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894201791, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2424, "num_deletes": 508, "total_data_size": 5032071, "memory_usage": 5106448, "flush_reason": "Manual Compaction"}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894220148, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 3255631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34335, "largest_seqno": 36754, "table_properties": {"data_size": 3245062, "index_size": 5975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 29579, "raw_average_key_size": 21, "raw_value_size": 3220417, "raw_average_value_size": 2305, "num_data_blocks": 256, "num_entries": 1397, "num_filter_entries": 1397, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091750, "oldest_key_time": 1760091750, "file_creation_time": 1760091894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 18537 microseconds, and 8825 cpu microseconds.
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.220228) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 3255631 bytes OK
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.220268) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.222314) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.222375) EVENT_LOG_v1 {"time_micros": 1760091894222366, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.222401) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 5019287, prev total WAL file size 5019287, number of live WAL files 2.
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.224474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(3179KB)], [66(13MB)]
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894224574, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 17070640, "oldest_snapshot_seqno": -1}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6498 keys, 14859050 bytes, temperature: kUnknown
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894310842, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 14859050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14815102, "index_size": 26622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 170822, "raw_average_key_size": 26, "raw_value_size": 14697358, "raw_average_value_size": 2261, "num_data_blocks": 1053, "num_entries": 6498, "num_filter_entries": 6498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.311126) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14859050 bytes
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.312552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 172.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 13.2 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(9.8) write-amplify(4.6) OK, records in: 7531, records dropped: 1033 output_compression: NoCompression
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.312570) EVENT_LOG_v1 {"time_micros": 1760091894312561, "job": 40, "event": "compaction_finished", "compaction_time_micros": 86360, "compaction_time_cpu_micros": 63715, "output_level": 6, "num_output_files": 1, "total_output_size": 14859050, "num_input_records": 7531, "num_output_records": 6498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894313433, "job": 40, "event": "table_file_deletion", "file_number": 68}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894316563, "job": 40, "event": "table_file_deletion", "file_number": 66}
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.224263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:24:54 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:24:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:24:55 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 13K writes, 4030 syncs, 3.37 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2312 writes, 7308 keys, 2312 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 2312 writes, 996 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 06:24:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:55 np0005479822 nova_compute[235132]: 2025-10-10 10:24:55.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:24:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:56.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:24:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:57.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:24:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:24:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:24:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:24:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:24:58 np0005479822 nova_compute[235132]: 2025-10-10 10:24:58.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:24:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:24:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:24:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:24:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:24:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:59.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:24:59 np0005479822 podman[256832]: 2025-10-10 10:24:59.99499259 +0000 UTC m=+0.090199157 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 06:25:00 np0005479822 nova_compute[235132]: 2025-10-10 10:25:00.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:01 np0005479822 nova_compute[235132]: 2025-10-10 10:25:01.371 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:01 np0005479822 nova_compute[235132]: 2025-10-10 10:25:01.372 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:01 np0005479822 nova_compute[235132]: 2025-10-10 10:25:01.372 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:01 np0005479822 nova_compute[235132]: 2025-10-10 10:25:01.372 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:25:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:01.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:02 np0005479822 nova_compute[235132]: 2025-10-10 10:25:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:02 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:03 np0005479822 nova_compute[235132]: 2025-10-10 10:25:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:03 np0005479822 nova_compute[235132]: 2025-10-10 10:25:03.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:04 np0005479822 nova_compute[235132]: 2025-10-10 10:25:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:04 np0005479822 nova_compute[235132]: 2025-10-10 10:25:04.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:25:04 np0005479822 nova_compute[235132]: 2025-10-10 10:25:04.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:25:04 np0005479822 nova_compute[235132]: 2025-10-10 10:25:04.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:25:04 np0005479822 nova_compute[235132]: 2025-10-10 10:25:04.067 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:05 np0005479822 nova_compute[235132]: 2025-10-10 10:25:05.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:06.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:08 np0005479822 nova_compute[235132]: 2025-10-10 10:25:08.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:08 np0005479822 nova_compute[235132]: 2025-10-10 10:25:08.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:08.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:09.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:10 np0005479822 nova_compute[235132]: 2025-10-10 10:25:10.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:11.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.077 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.077 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:25:11 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:25:11 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3659408900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.540 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:25:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.740 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.742 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4826MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.742 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.742 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.845 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.846 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:25:11 np0005479822 nova_compute[235132]: 2025-10-10 10:25:11.909 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:25:11 np0005479822 podman[256906]: 2025-10-10 10:25:11.972756686 +0000 UTC m=+0.075923057 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid)
Oct 10 06:25:11 np0005479822 podman[256907]: 2025-10-10 10:25:11.996489354 +0000 UTC m=+0.085205620 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 06:25:12 np0005479822 podman[256908]: 2025-10-10 10:25:12.03983381 +0000 UTC m=+0.132382871 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 10 06:25:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:25:12 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1732526091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:25:12 np0005479822 nova_compute[235132]: 2025-10-10 10:25:12.400 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:25:12 np0005479822 nova_compute[235132]: 2025-10-10 10:25:12.409 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:25:12 np0005479822 nova_compute[235132]: 2025-10-10 10:25:12.431 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:25:12 np0005479822 nova_compute[235132]: 2025-10-10 10:25:12.434 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:25:12 np0005479822 nova_compute[235132]: 2025-10-10 10:25:12.434 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:25:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:13.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:13 np0005479822 nova_compute[235132]: 2025-10-10 10:25:13.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:15.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:15 np0005479822 nova_compute[235132]: 2025-10-10 10:25:15.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:17.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:18 np0005479822 nova_compute[235132]: 2025-10-10 10:25:18.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:19.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:20 np0005479822 nova_compute[235132]: 2025-10-10 10:25:20.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:25:22 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6935 writes, 37K keys, 6935 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 6935 writes, 6935 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1550 writes, 8346 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 17.91 MB, 0.03 MB/s#012Interval WAL: 1551 writes, 1551 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    144.7      0.38              0.20        20    0.019       0      0       0.0       0.0#012  L6      1/0   14.17 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    207.4    178.1      1.38              0.83        19    0.072    107K    10K       0.0       0.0#012 Sum      1/0   14.17 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    162.8    170.9      1.75              1.03        39    0.045    107K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    163.0    164.0      0.49              0.32        10    0.049     34K   3591       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    207.4    178.1      1.38              0.83        19    0.072    107K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    145.6      0.37              0.20        19    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.053, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.12 MB/s write, 0.28 GB read, 0.12 MB/s read, 1.8 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 26.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000267 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1611,26.00 MB,8.55392%) FilterBlock(39,311.17 KB,0.0999601%) IndexBlock(39,528.27 KB,0.169699%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct 10 06:25:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:23.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:23 np0005479822 nova_compute[235132]: 2025-10-10 10:25:23.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:25.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:25 np0005479822 nova_compute[235132]: 2025-10-10 10:25:25.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:27.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:28 np0005479822 nova_compute[235132]: 2025-10-10 10:25:28.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:29.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:30 np0005479822 nova_compute[235132]: 2025-10-10 10:25:30.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:31 np0005479822 podman[257023]: 2025-10-10 10:25:31.013023094 +0000 UTC m=+0.119217819 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 10 06:25:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:31.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:33 np0005479822 nova_compute[235132]: 2025-10-10 10:25:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:35.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:35.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:35 np0005479822 nova_compute[235132]: 2025-10-10 10:25:35.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:38 np0005479822 nova_compute[235132]: 2025-10-10 10:25:38.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:40 np0005479822 nova_compute[235132]: 2025-10-10 10:25:40.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:41.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:42 np0005479822 podman[257069]: 2025-10-10 10:25:42.139499158 +0000 UTC m=+0.096554590 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 10 06:25:42 np0005479822 podman[257070]: 2025-10-10 10:25:42.150429867 +0000 UTC m=+0.100950341 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:25:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:25:42.225 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:25:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:25:42.225 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:25:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:25:42.225 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:25:42 np0005479822 podman[257110]: 2025-10-10 10:25:42.273226983 +0000 UTC m=+0.106225614 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 06:25:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:43.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:43 np0005479822 nova_compute[235132]: 2025-10-10 10:25:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:45.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:25:45 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:25:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:45 np0005479822 nova_compute[235132]: 2025-10-10 10:25:45.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:46 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:25:46 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:25:46 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:25:46 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:25:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:47.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:48 np0005479822 nova_compute[235132]: 2025-10-10 10:25:48.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:49.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:50 np0005479822 nova_compute[235132]: 2025-10-10 10:25:50.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:51 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:25:51 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:25:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:51.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:53 np0005479822 nova_compute[235132]: 2025-10-10 10:25:53.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:25:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:25:55 np0005479822 nova_compute[235132]: 2025-10-10 10:25:55.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:25:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:25:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:57.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:25:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:25:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:25:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:25:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:25:58 np0005479822 nova_compute[235132]: 2025-10-10 10:25:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:25:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:25:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:25:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:25:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:00 np0005479822 nova_compute[235132]: 2025-10-10 10:26:00.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:01.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:01.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:02 np0005479822 podman[257255]: 2025-10-10 10:26:02.001034766 +0000 UTC m=+0.088190332 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 10 06:26:02 np0005479822 nova_compute[235132]: 2025-10-10 10:26:02.436 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:02 np0005479822 nova_compute[235132]: 2025-10-10 10:26:02.437 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:02 np0005479822 nova_compute[235132]: 2025-10-10 10:26:02.437 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:02 np0005479822 nova_compute[235132]: 2025-10-10 10:26:02.438 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:26:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:03 np0005479822 nova_compute[235132]: 2025-10-10 10:26:03.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:03 np0005479822 nova_compute[235132]: 2025-10-10 10:26:03.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:03.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:26:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:03.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:26:03 np0005479822 nova_compute[235132]: 2025-10-10 10:26:03.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:05 np0005479822 nova_compute[235132]: 2025-10-10 10:26:05.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:05.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:05 np0005479822 nova_compute[235132]: 2025-10-10 10:26:05.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:06 np0005479822 nova_compute[235132]: 2025-10-10 10:26:06.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:06 np0005479822 nova_compute[235132]: 2025-10-10 10:26:06.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:26:06 np0005479822 nova_compute[235132]: 2025-10-10 10:26:06.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:26:06 np0005479822 nova_compute[235132]: 2025-10-10 10:26:06.064 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:26:07 np0005479822 nova_compute[235132]: 2025-10-10 10:26:07.060 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:08 np0005479822 nova_compute[235132]: 2025-10-10 10:26:08.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:09.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:10 np0005479822 nova_compute[235132]: 2025-10-10 10:26:10.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:10 np0005479822 nova_compute[235132]: 2025-10-10 10:26:10.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:11.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.069 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:26:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:26:12 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/943224587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.530 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:26:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.808 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.811 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4859MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.812 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:26:12 np0005479822 nova_compute[235132]: 2025-10-10 10:26:12.812 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:26:12 np0005479822 podman[257326]: 2025-10-10 10:26:12.962815069 +0000 UTC m=+0.064810682 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 06:26:12 np0005479822 podman[257327]: 2025-10-10 10:26:12.989495578 +0000 UTC m=+0.080372747 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 06:26:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:13 np0005479822 podman[257328]: 2025-10-10 10:26:13.00670995 +0000 UTC m=+0.101005993 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.047 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.047 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:26:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:13.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.153 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.256 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.256 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.294 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.328 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.348 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:26:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:13.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:13 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:26:13 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2125983211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.811 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.820 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.840 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.844 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.845 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.846 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.847 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.870 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.871 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.872 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 10 06:26:13 np0005479822 nova_compute[235132]: 2025-10-10 10:26:13.889 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:15 np0005479822 nova_compute[235132]: 2025-10-10 10:26:15.062 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:26:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:26:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:15.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:26:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:15 np0005479822 nova_compute[235132]: 2025-10-10 10:26:15.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:17.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:18 np0005479822 nova_compute[235132]: 2025-10-10 10:26:18.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:19.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:19.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:20 np0005479822 nova_compute[235132]: 2025-10-10 10:26:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:23.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:23 np0005479822 nova_compute[235132]: 2025-10-10 10:26:23.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.592944) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984592971, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1135, "num_deletes": 251, "total_data_size": 2602490, "memory_usage": 2650792, "flush_reason": "Manual Compaction"}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984601618, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1080048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36759, "largest_seqno": 37889, "table_properties": {"data_size": 1076005, "index_size": 1631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10781, "raw_average_key_size": 20, "raw_value_size": 1067267, "raw_average_value_size": 2068, "num_data_blocks": 70, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091895, "oldest_key_time": 1760091895, "file_creation_time": 1760091984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8726 microseconds, and 3588 cpu microseconds.
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.601665) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1080048 bytes OK
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.601687) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603422) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603447) EVENT_LOG_v1 {"time_micros": 1760091984603439, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603469) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2596970, prev total WAL file size 2596970, number of live WAL files 2.
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.605085) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1054KB)], [69(14MB)]
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984605163, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 15939098, "oldest_snapshot_seqno": -1}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6535 keys, 12460113 bytes, temperature: kUnknown
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984675849, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12460113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12419575, "index_size": 23082, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 171757, "raw_average_key_size": 26, "raw_value_size": 12304885, "raw_average_value_size": 1882, "num_data_blocks": 907, "num_entries": 6535, "num_filter_entries": 6535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.676260) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12460113 bytes
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.677820) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.2 rd, 176.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(26.3) write-amplify(11.5) OK, records in: 7014, records dropped: 479 output_compression: NoCompression
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.677901) EVENT_LOG_v1 {"time_micros": 1760091984677882, "job": 42, "event": "compaction_finished", "compaction_time_micros": 70779, "compaction_time_cpu_micros": 45566, "output_level": 6, "num_output_files": 1, "total_output_size": 12460113, "num_input_records": 7014, "num_output_records": 6535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984678543, "job": 42, "event": "table_file_deletion", "file_number": 71}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984684410, "job": 42, "event": "table_file_deletion", "file_number": 69}
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.604978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:26:24 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:26:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:25.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:26 np0005479822 nova_compute[235132]: 2025-10-10 10:26:26.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:27.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:27.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:28 np0005479822 nova_compute[235132]: 2025-10-10 10:26:28.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:29.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:29.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:31 np0005479822 nova_compute[235132]: 2025-10-10 10:26:31.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:31.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:31.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:32 np0005479822 podman[257450]: 2025-10-10 10:26:32.983101295 +0000 UTC m=+0.082908818 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 06:26:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:33.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:33.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:33 np0005479822 nova_compute[235132]: 2025-10-10 10:26:33.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:36 np0005479822 nova_compute[235132]: 2025-10-10 10:26:36.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:37.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:38 np0005479822 nova_compute[235132]: 2025-10-10 10:26:38.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:39.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:41 np0005479822 nova_compute[235132]: 2025-10-10 10:26:41.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:41.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:26:42.226 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:26:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:26:42.227 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:26:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:26:42.227 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:26:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:43.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:43 np0005479822 nova_compute[235132]: 2025-10-10 10:26:43.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:44 np0005479822 podman[257502]: 2025-10-10 10:26:44.011827927 +0000 UTC m=+0.104327023 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:26:44 np0005479822 podman[257501]: 2025-10-10 10:26:44.025997574 +0000 UTC m=+0.122936650 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:26:44 np0005479822 podman[257503]: 2025-10-10 10:26:44.062453371 +0000 UTC m=+0.146352682 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller)
Oct 10 06:26:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:45.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:46 np0005479822 nova_compute[235132]: 2025-10-10 10:26:46.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:47.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:47.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:48 np0005479822 nova_compute[235132]: 2025-10-10 10:26:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:49.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:49.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:51 np0005479822 nova_compute[235132]: 2025-10-10 10:26:51.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:51.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:26:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:26:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:26:52 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:26:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:53 np0005479822 nova_compute[235132]: 2025-10-10 10:26:53.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:26:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:55.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:56 np0005479822 nova_compute[235132]: 2025-10-10 10:26:56.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:26:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:26:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:57.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:26:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:57.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:26:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:26:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:26:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:26:58 np0005479822 nova_compute[235132]: 2025-10-10 10:26:58.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:26:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:26:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:59.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:26:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:26:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:26:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:59.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:01 np0005479822 nova_compute[235132]: 2025-10-10 10:27:01.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:01 np0005479822 nova_compute[235132]: 2025-10-10 10:27:01.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:01.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:01.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:02 np0005479822 nova_compute[235132]: 2025-10-10 10:27:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:02 np0005479822 nova_compute[235132]: 2025-10-10 10:27:02.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:02 np0005479822 nova_compute[235132]: 2025-10-10 10:27:02.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:27:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:03 np0005479822 nova_compute[235132]: 2025-10-10 10:27:03.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:03 np0005479822 nova_compute[235132]: 2025-10-10 10:27:03.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:03 np0005479822 podman[257706]: 2025-10-10 10:27:03.99829934 +0000 UTC m=+0.093751244 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:27:05 np0005479822 nova_compute[235132]: 2025-10-10 10:27:05.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:05.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:05.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:06 np0005479822 nova_compute[235132]: 2025-10-10 10:27:06.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:06 np0005479822 nova_compute[235132]: 2025-10-10 10:27:06.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:07.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:08 np0005479822 nova_compute[235132]: 2025-10-10 10:27:08.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:08 np0005479822 nova_compute[235132]: 2025-10-10 10:27:08.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:27:08 np0005479822 nova_compute[235132]: 2025-10-10 10:27:08.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:27:08 np0005479822 nova_compute[235132]: 2025-10-10 10:27:08.062 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:27:08 np0005479822 nova_compute[235132]: 2025-10-10 10:27:08.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:09.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:11 np0005479822 nova_compute[235132]: 2025-10-10 10:27:11.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:11.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:12 np0005479822 nova_compute[235132]: 2025-10-10 10:27:12.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:13 np0005479822 nova_compute[235132]: 2025-10-10 10:27:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.164 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:27:14 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:27:14 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4181353496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.664 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.873 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.875 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4846MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.875 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:27:14 np0005479822 nova_compute[235132]: 2025-10-10 10:27:14.875 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:27:14 np0005479822 podman[257751]: 2025-10-10 10:27:14.947686605 +0000 UTC m=+0.055713443 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 06:27:14 np0005479822 podman[257752]: 2025-10-10 10:27:14.95368773 +0000 UTC m=+0.060046633 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 06:27:15 np0005479822 podman[257753]: 2025-10-10 10:27:15.031498606 +0000 UTC m=+0.134602180 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.127 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.128 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.157 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:27:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:15 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:27:15 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/106958647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.684 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.692 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.743 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.745 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:27:15 np0005479822 nova_compute[235132]: 2025-10-10 10:27:15.745 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:27:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:16 np0005479822 nova_compute[235132]: 2025-10-10 10:27:16.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:27:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:17.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:27:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:17.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:18 np0005479822 nova_compute[235132]: 2025-10-10 10:27:18.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:19.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:21 np0005479822 nova_compute[235132]: 2025-10-10 10:27:21.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:23.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:23 np0005479822 nova_compute[235132]: 2025-10-10 10:27:23.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:27:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:25.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:27:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:26 np0005479822 nova_compute[235132]: 2025-10-10 10:27:26.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:28 np0005479822 nova_compute[235132]: 2025-10-10 10:27:28.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:29.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:31 np0005479822 nova_compute[235132]: 2025-10-10 10:27:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:31.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:31 np0005479822 nova_compute[235132]: 2025-10-10 10:27:31.315 2 DEBUG oslo_concurrency.processutils [None req-5428eec2-0e0c-4df7-adf7-b6b22d8050c9 e1aed125091e48e09d5990f110c14c39 ec962e275689437d80680ff3ea69c852 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:27:31 np0005479822 nova_compute[235132]: 2025-10-10 10:27:31.360 2 DEBUG oslo_concurrency.processutils [None req-5428eec2-0e0c-4df7-adf7-b6b22d8050c9 e1aed125091e48e09d5990f110c14c39 ec962e275689437d80680ff3ea69c852 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:27:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:33.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:33 np0005479822 nova_compute[235132]: 2025-10-10 10:27:33.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:34 np0005479822 podman[257874]: 2025-10-10 10:27:34.979725732 +0000 UTC m=+0.076827210 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 06:27:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:35.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:35.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:36 np0005479822 nova_compute[235132]: 2025-10-10 10:27:36.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:36 np0005479822 nova_compute[235132]: 2025-10-10 10:27:36.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:36 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:27:36.324 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 10 06:27:36 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:27:36.326 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 10 06:27:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:37.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:38 np0005479822 nova_compute[235132]: 2025-10-10 10:27:38.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:39.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:39.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:41 np0005479822 nova_compute[235132]: 2025-10-10 10:27:41.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:41.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:41.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:27:42.227 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:27:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:27:42.228 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:27:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:27:42.228 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:27:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:43.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:43.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:43 np0005479822 nova_compute[235132]: 2025-10-10 10:27:43.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:44 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:27:44.328 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 10 06:27:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:45.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:45 np0005479822 podman[257925]: 2025-10-10 10:27:45.983128932 +0000 UTC m=+0.089131207 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:27:45 np0005479822 podman[257926]: 2025-10-10 10:27:45.9885347 +0000 UTC m=+0.073786828 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:27:46 np0005479822 podman[257927]: 2025-10-10 10:27:46.050407361 +0000 UTC m=+0.132039180 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 06:27:46 np0005479822 nova_compute[235132]: 2025-10-10 10:27:46.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:47.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:47.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:48 np0005479822 nova_compute[235132]: 2025-10-10 10:27:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:49.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:49.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:51 np0005479822 nova_compute[235132]: 2025-10-10 10:27:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:51.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:53 np0005479822 nova_compute[235132]: 2025-10-10 10:27:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:27:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:55.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:27:56 np0005479822 nova_compute[235132]: 2025-10-10 10:27:56.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:27:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:27:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:27:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:27:57 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:27:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:57.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:27:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:27:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:27:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:27:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:27:58 np0005479822 nova_compute[235132]: 2025-10-10 10:27:58.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:27:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:27:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:27:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:27:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:27:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:59.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:01 np0005479822 nova_compute[235132]: 2025-10-10 10:28:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:28:02 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:28:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:03 np0005479822 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:03 np0005479822 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:03 np0005479822 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:03 np0005479822 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:03 np0005479822 nova_compute[235132]: 2025-10-10 10:28:03.748 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:28:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:03.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:03 np0005479822 nova_compute[235132]: 2025-10-10 10:28:03.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:05.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:28:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:28:05 np0005479822 podman[258127]: 2025-10-10 10:28:05.95544528 +0000 UTC m=+0.056484256 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 06:28:06 np0005479822 nova_compute[235132]: 2025-10-10 10:28:06.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:06 np0005479822 nova_compute[235132]: 2025-10-10 10:28:06.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:07 np0005479822 nova_compute[235132]: 2025-10-10 10:28:07.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:08 np0005479822 nova_compute[235132]: 2025-10-10 10:28:08.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:09.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:10 np0005479822 nova_compute[235132]: 2025-10-10 10:28:10.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:10 np0005479822 nova_compute[235132]: 2025-10-10 10:28:10.062 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:10 np0005479822 nova_compute[235132]: 2025-10-10 10:28:10.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:28:10 np0005479822 nova_compute[235132]: 2025-10-10 10:28:10.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:28:10 np0005479822 nova_compute[235132]: 2025-10-10 10:28:10.081 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:28:11 np0005479822 nova_compute[235132]: 2025-10-10 10:28:11.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:11.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:11.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:13.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:13 np0005479822 nova_compute[235132]: 2025-10-10 10:28:13.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.083 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.083 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.084 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.084 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.084 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:28:14 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:28:14 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3528792367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.620 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.853 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.855 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4863MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.855 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.856 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.941 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.942 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:28:14 np0005479822 nova_compute[235132]: 2025-10-10 10:28:14.978 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:28:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:15 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:28:15 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813754537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:28:15 np0005479822 nova_compute[235132]: 2025-10-10 10:28:15.449 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:28:15 np0005479822 nova_compute[235132]: 2025-10-10 10:28:15.460 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:28:15 np0005479822 nova_compute[235132]: 2025-10-10 10:28:15.478 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:28:15 np0005479822 nova_compute[235132]: 2025-10-10 10:28:15.481 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:28:15 np0005479822 nova_compute[235132]: 2025-10-10 10:28:15.482 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:28:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:28:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:28:16 np0005479822 nova_compute[235132]: 2025-10-10 10:28:16.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:16 np0005479822 podman[258195]: 2025-10-10 10:28:16.988720848 +0000 UTC m=+0.083882554 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 06:28:16 np0005479822 podman[258196]: 2025-10-10 10:28:16.998296389 +0000 UTC m=+0.086275389 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd)
Oct 10 06:28:17 np0005479822 podman[258197]: 2025-10-10 10:28:17.038782706 +0000 UTC m=+0.126918480 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:28:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:18 np0005479822 nova_compute[235132]: 2025-10-10 10:28:18.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:19.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:21 np0005479822 nova_compute[235132]: 2025-10-10 10:28:21.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:21.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:23.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:23 np0005479822 nova_compute[235132]: 2025-10-10 10:28:23.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:25.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:26 np0005479822 nova_compute[235132]: 2025-10-10 10:28:26.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.516413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107516499, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1448, "num_deletes": 251, "total_data_size": 3527461, "memory_usage": 3577536, "flush_reason": "Manual Compaction"}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107535495, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 2302543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37894, "largest_seqno": 39337, "table_properties": {"data_size": 2296439, "index_size": 3367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13187, "raw_average_key_size": 20, "raw_value_size": 2284080, "raw_average_value_size": 3465, "num_data_blocks": 147, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091985, "oldest_key_time": 1760091985, "file_creation_time": 1760092107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 19297 microseconds, and 11453 cpu microseconds.
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.535719) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 2302543 bytes OK
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.535805) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.537589) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.537616) EVENT_LOG_v1 {"time_micros": 1760092107537607, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.537641) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3520674, prev total WAL file size 3520674, number of live WAL files 2.
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.539926) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(2248KB)], [72(11MB)]
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107539963, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 14762656, "oldest_snapshot_seqno": -1}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6678 keys, 12616581 bytes, temperature: kUnknown
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107607618, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 12616581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12574968, "index_size": 23837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175432, "raw_average_key_size": 26, "raw_value_size": 12457514, "raw_average_value_size": 1865, "num_data_blocks": 936, "num_entries": 6678, "num_filter_entries": 6678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760092107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.607957) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12616581 bytes
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.609701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.8 rd, 186.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(11.9) write-amplify(5.5) OK, records in: 7194, records dropped: 516 output_compression: NoCompression
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.609731) EVENT_LOG_v1 {"time_micros": 1760092107609717, "job": 44, "event": "compaction_finished", "compaction_time_micros": 67787, "compaction_time_cpu_micros": 34512, "output_level": 6, "num_output_files": 1, "total_output_size": 12616581, "num_input_records": 7194, "num_output_records": 6678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107611061, "job": 44, "event": "table_file_deletion", "file_number": 74}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107615637, "job": 44, "event": "table_file_deletion", "file_number": 72}
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.539814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:28:27 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:28:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:27.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:28 np0005479822 nova_compute[235132]: 2025-10-10 10:28:28.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:31 np0005479822 nova_compute[235132]: 2025-10-10 10:28:31.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:33 np0005479822 nova_compute[235132]: 2025-10-10 10:28:33.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:36 np0005479822 nova_compute[235132]: 2025-10-10 10:28:36.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:36 np0005479822 podman[258297]: 2025-10-10 10:28:36.9781788 +0000 UTC m=+0.074876328 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 10 06:28:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:38 np0005479822 nova_compute[235132]: 2025-10-10 10:28:38.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:39.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:41 np0005479822 nova_compute[235132]: 2025-10-10 10:28:41.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:41.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:28:42.228 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:28:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:28:42.229 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:28:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:28:42.229 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:28:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 06:28:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 06:28:43 np0005479822 nova_compute[235132]: 2025-10-10 10:28:43.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:28:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:45.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:28:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:46 np0005479822 nova_compute[235132]: 2025-10-10 10:28:46.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:47.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:47 np0005479822 podman[258348]: 2025-10-10 10:28:47.956170936 +0000 UTC m=+0.063653641 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:28:47 np0005479822 podman[258349]: 2025-10-10 10:28:47.972597675 +0000 UTC m=+0.070300283 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 06:28:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:48 np0005479822 podman[258350]: 2025-10-10 10:28:48.031265049 +0000 UTC m=+0.118126580 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 06:28:48 np0005479822 nova_compute[235132]: 2025-10-10 10:28:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:49.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:51 np0005479822 nova_compute[235132]: 2025-10-10 10:28:51.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:51.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:53.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:53 np0005479822 nova_compute[235132]: 2025-10-10 10:28:53.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 06:28:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:53.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 06:28:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:55.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:56 np0005479822 nova_compute[235132]: 2025-10-10 10:28:56.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:28:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:28:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:28:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:28:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:28:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:28:58 np0005479822 nova_compute[235132]: 2025-10-10 10:28:58.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:28:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:28:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:28:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:28:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:28:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:01 np0005479822 nova_compute[235132]: 2025-10-10 10:29:01.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:03 np0005479822 nova_compute[235132]: 2025-10-10 10:29:03.483 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:03 np0005479822 nova_compute[235132]: 2025-10-10 10:29:03.483 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:29:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:29:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:29:03 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:29:03 np0005479822 nova_compute[235132]: 2025-10-10 10:29:03.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:03.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:04 np0005479822 nova_compute[235132]: 2025-10-10 10:29:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:04 np0005479822 nova_compute[235132]: 2025-10-10 10:29:04.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:04 np0005479822 nova_compute[235132]: 2025-10-10 10:29:04.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:29:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:06 np0005479822 nova_compute[235132]: 2025-10-10 10:29:06.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:07 np0005479822 nova_compute[235132]: 2025-10-10 10:29:07.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:07 np0005479822 podman[258531]: 2025-10-10 10:29:07.965765441 +0000 UTC m=+0.070025126 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 06:29:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:08 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:29:08 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:29:08 np0005479822 nova_compute[235132]: 2025-10-10 10:29:08.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:09 np0005479822 nova_compute[235132]: 2025-10-10 10:29:09.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:11 np0005479822 nova_compute[235132]: 2025-10-10 10:29:11.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:11.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:11.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:12 np0005479822 nova_compute[235132]: 2025-10-10 10:29:12.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:12 np0005479822 nova_compute[235132]: 2025-10-10 10:29:12.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:29:12 np0005479822 nova_compute[235132]: 2025-10-10 10:29:12.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:29:12 np0005479822 nova_compute[235132]: 2025-10-10 10:29:12.066 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:29:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:13.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:13 np0005479822 nova_compute[235132]: 2025-10-10 10:29:13.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:15 np0005479822 nova_compute[235132]: 2025-10-10 10:29:15.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:29:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:15.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.071 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.071 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:16 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:29:16 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1325988144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.522 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.766 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.767 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4834MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.767 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.768 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.857 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.858 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 06:29:16 np0005479822 nova_compute[235132]: 2025-10-10 10:29:16.879 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 06:29:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:29:17 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2046352366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:29:17 np0005479822 nova_compute[235132]: 2025-10-10 10:29:17.397 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 06:29:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:17 np0005479822 nova_compute[235132]: 2025-10-10 10:29:17.405 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 06:29:17 np0005479822 nova_compute[235132]: 2025-10-10 10:29:17.424 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 06:29:17 np0005479822 nova_compute[235132]: 2025-10-10 10:29:17.426 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 06:29:17 np0005479822 nova_compute[235132]: 2025-10-10 10:29:17.427 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:29:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:18 np0005479822 nova_compute[235132]: 2025-10-10 10:29:18.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:19 np0005479822 podman[258625]: 2025-10-10 10:29:19.026781886 +0000 UTC m=+0.128359239 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:29:19 np0005479822 podman[258626]: 2025-10-10 10:29:19.045784995 +0000 UTC m=+0.142356092 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 06:29:19 np0005479822 podman[258627]: 2025-10-10 10:29:19.056896379 +0000 UTC m=+0.148003247 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 06:29:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:19.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:21 np0005479822 nova_compute[235132]: 2025-10-10 10:29:21.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:21.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:23.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:23 np0005479822 nova_compute[235132]: 2025-10-10 10:29:23.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:26 np0005479822 nova_compute[235132]: 2025-10-10 10:29:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:28 np0005479822 nova_compute[235132]: 2025-10-10 10:29:28.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:31 np0005479822 nova_compute[235132]: 2025-10-10 10:29:31.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:31.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 06:29:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:33.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 06:29:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:33 np0005479822 nova_compute[235132]: 2025-10-10 10:29:33.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:35.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:36 np0005479822 nova_compute[235132]: 2025-10-10 10:29:36.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:39 np0005479822 nova_compute[235132]: 2025-10-10 10:29:39.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:29:39 np0005479822 podman[258726]: 2025-10-10 10:29:39.034724988 +0000 UTC m=+0.128819433 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:29:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:39.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:41 np0005479822 nova_compute[235132]: 2025-10-10 10:29:41.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:29:42.230 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:29:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:29:42.231 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:29:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:29:42.231 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:29:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:43 np0005479822 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 06:29:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:43.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:44 np0005479822 nova_compute[235132]: 2025-10-10 10:29:44.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:45.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:46 np0005479822 nova_compute[235132]: 2025-10-10 10:29:46.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:47.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:49 np0005479822 nova_compute[235132]: 2025-10-10 10:29:49.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:49.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:49 np0005479822 podman[258787]: 2025-10-10 10:29:49.978555061 +0000 UTC m=+0.074331983 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 06:29:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:49.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:50 np0005479822 podman[258788]: 2025-10-10 10:29:50.005654512 +0000 UTC m=+0.092530921 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 06:29:50 np0005479822 podman[258789]: 2025-10-10 10:29:50.018456411 +0000 UTC m=+0.109199066 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 06:29:51 np0005479822 nova_compute[235132]: 2025-10-10 10:29:51.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:51.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:53.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:54 np0005479822 nova_compute[235132]: 2025-10-10 10:29:54.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:29:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:29:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:55.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:56 np0005479822 nova_compute[235132]: 2025-10-10 10:29:56.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:57.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:29:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:57.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:29:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:29:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:29:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:29:59 np0005479822 nova_compute[235132]: 2025-10-10 10:29:59.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:29:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:59.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:29:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:29:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:29:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:59.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:00 np0005479822 ceph-mon[79167]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 10 06:30:00 np0005479822 ceph-mon[79167]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 10 06:30:00 np0005479822 ceph-mon[79167]:    daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0 is in error state
Oct 10 06:30:01 np0005479822 nova_compute[235132]: 2025-10-10 10:30:01.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:01 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:01 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:01.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:01 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:02 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:02 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:01.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:02 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:03 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:03 np0005479822 nova_compute[235132]: 2025-10-10 10:30:03.427 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:03 np0005479822 nova_compute[235132]: 2025-10-10 10:30:03.428 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:03 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:03 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:03 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:04 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:04 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:04 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:04.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:04 np0005479822 nova_compute[235132]: 2025-10-10 10:30:04.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:05 np0005479822 nova_compute[235132]: 2025-10-10 10:30:05.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:05 np0005479822 nova_compute[235132]: 2025-10-10 10:30:05.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:05 np0005479822 nova_compute[235132]: 2025-10-10 10:30:05.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 10 06:30:05 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:05 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:05 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:06 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:06 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:06 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:06.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:06 np0005479822 nova_compute[235132]: 2025-10-10 10:30:06.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:07 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:07 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:07 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:07 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:08 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:08 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:08 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:08 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:08.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:08 np0005479822 podman[259009]: 2025-10-10 10:30:08.997570766 +0000 UTC m=+0.096749706 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 06:30:09 np0005479822 nova_compute[235132]: 2025-10-10 10:30:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:09 np0005479822 nova_compute[235132]: 2025-10-10 10:30:09.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:09 np0005479822 podman[259009]: 2025-10-10 10:30:09.102039992 +0000 UTC m=+0.201218872 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 06:30:09 np0005479822 podman[259045]: 2025-10-10 10:30:09.256626778 +0000 UTC m=+0.077842779 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:30:09 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:09 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:09 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:09 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 06:30:09 np0005479822 podman[259149]: 2025-10-10 10:30:09.690822447 +0000 UTC m=+0.062041216 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 06:30:09 np0005479822 podman[259149]: 2025-10-10 10:30:09.701926621 +0000 UTC m=+0.073145430 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 06:30:10 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:10 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:10 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:10.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:10 np0005479822 nova_compute[235132]: 2025-10-10 10:30:10.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:10 np0005479822 nova_compute[235132]: 2025-10-10 10:30:10.061 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:10 np0005479822 podman[259240]: 2025-10-10 10:30:10.158213864 +0000 UTC m=+0.082612350 container exec d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 06:30:10 np0005479822 podman[259240]: 2025-10-10 10:30:10.167866618 +0000 UTC m=+0.092265094 container exec_died d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 06:30:10 np0005479822 podman[259308]: 2025-10-10 10:30:10.464566788 +0000 UTC m=+0.073900660 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 06:30:10 np0005479822 podman[259308]: 2025-10-10 10:30:10.475921099 +0000 UTC m=+0.085254961 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 06:30:10 np0005479822 podman[259373]: 2025-10-10 10:30:10.728196445 +0000 UTC m=+0.064993328 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Oct 10 06:30:10 np0005479822 podman[259373]: 2025-10-10 10:30:10.744294595 +0000 UTC m=+0.081091468 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=keepalived-container, release=1793, vendor=Red Hat, Inc., architecture=x86_64)
Oct 10 06:30:11 np0005479822 nova_compute[235132]: 2025-10-10 10:30:11.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:11 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:11 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:11 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:11 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 06:30:12 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:12 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:12 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:12.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:12 np0005479822 nova_compute[235132]: 2025-10-10 10:30:12.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:12 np0005479822 nova_compute[235132]: 2025-10-10 10:30:12.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 10 06:30:12 np0005479822 nova_compute[235132]: 2025-10-10 10:30:12.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 10 06:30:12 np0005479822 nova_compute[235132]: 2025-10-10 10:30:12.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 10 06:30:12 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 06:30:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 06:30:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:12 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 06:30:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:13 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:13 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:13 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:13 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:13.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:14 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:14 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:14 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:14.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:14 np0005479822 nova_compute[235132]: 2025-10-10 10:30:14.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:15 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:15 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:15 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:15.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:16 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:16 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:16 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:16.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:16 np0005479822 nova_compute[235132]: 2025-10-10 10:30:16.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:16 np0005479822 nova_compute[235132]: 2025-10-10 10:30:16.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:16 np0005479822 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.083 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.084 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.085 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.085 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.085 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:30:17 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:17 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:17 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:17.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:30:17 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/862817915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.615 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:30:17 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.806 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.807 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4846MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.807 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.807 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.877 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.877 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 10 06:30:17 np0005479822 nova_compute[235132]: 2025-10-10 10:30:17.893 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 10 06:30:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:18 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:18 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:18 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:18 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:18.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:18 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 06:30:18 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/832635326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 06:30:18 np0005479822 nova_compute[235132]: 2025-10-10 10:30:18.398 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 10 06:30:18 np0005479822 nova_compute[235132]: 2025-10-10 10:30:18.407 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 10 06:30:18 np0005479822 nova_compute[235132]: 2025-10-10 10:30:18.433 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 10 06:30:18 np0005479822 nova_compute[235132]: 2025-10-10 10:30:18.436 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 10 06:30:18 np0005479822 nova_compute[235132]: 2025-10-10 10:30:18.436 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 10 06:30:19 np0005479822 nova_compute[235132]: 2025-10-10 10:30:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:19 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:19 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:19 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:19.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:20 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:20 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:20 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:20.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:21 np0005479822 podman[259563]: 2025-10-10 10:30:21.004258253 +0000 UTC m=+0.095067740 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:30:21 np0005479822 podman[259562]: 2025-10-10 10:30:21.055100086 +0000 UTC m=+0.147909317 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 06:30:21 np0005479822 podman[259564]: 2025-10-10 10:30:21.07541703 +0000 UTC m=+0.162549815 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 10 06:30:21 np0005479822 nova_compute[235132]: 2025-10-10 10:30:21.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:21 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:21 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:21 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:21.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:22 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:22 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:22 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:22.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:22 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:23 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:23 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:23 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:23 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:23.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:24 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:24 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:24 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:24 np0005479822 nova_compute[235132]: 2025-10-10 10:30:24.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:25 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:25 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:25 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:26 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:26 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:26 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:26.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:26 np0005479822 nova_compute[235132]: 2025-10-10 10:30:26.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:27 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:27 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:27 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:27 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:28 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:28 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:28 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:28 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:28.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:29 np0005479822 nova_compute[235132]: 2025-10-10 10:30:29.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:29 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:29 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:29 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:30 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:30 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:30 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:31 np0005479822 nova_compute[235132]: 2025-10-10 10:30:31.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:31 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:31 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:31 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:32 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:32 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:32 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:32 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:33 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:33 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:33 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:33 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:33 np0005479822 systemd-logind[789]: New session 60 of user zuul.
Oct 10 06:30:34 np0005479822 systemd[1]: Started Session 60 of User zuul.
Oct 10 06:30:34 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:34 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:34 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:34.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:34 np0005479822 nova_compute[235132]: 2025-10-10 10:30:34.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:35 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:35 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:35 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:36 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:36 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:36 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:36.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:36 np0005479822 nova_compute[235132]: 2025-10-10 10:30:36.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:37 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:37 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:37 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:37 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 06:30:37 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3130338205' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 06:30:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:38 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:38 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:38 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:38 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:39 np0005479822 nova_compute[235132]: 2025-10-10 10:30:39.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:39 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:39 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:39 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:39.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.669378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239669429, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1587, "num_deletes": 258, "total_data_size": 3888679, "memory_usage": 3937488, "flush_reason": "Manual Compaction"}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239681722, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2541098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39342, "largest_seqno": 40924, "table_properties": {"data_size": 2534363, "index_size": 3806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14470, "raw_average_key_size": 20, "raw_value_size": 2520675, "raw_average_value_size": 3486, "num_data_blocks": 164, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760092108, "oldest_key_time": 1760092108, "file_creation_time": 1760092239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 12531 microseconds, and 6913 cpu microseconds.
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.681893) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2541098 bytes OK
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.681961) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.683481) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.683497) EVENT_LOG_v1 {"time_micros": 1760092239683492, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.683518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3881260, prev total WAL file size 3881260, number of live WAL files 2.
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.684605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2481KB)], [75(12MB)]
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239684638, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15157679, "oldest_snapshot_seqno": -1}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6867 keys, 14996332 bytes, temperature: kUnknown
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239757252, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 14996332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14951013, "index_size": 27031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 180389, "raw_average_key_size": 26, "raw_value_size": 14827761, "raw_average_value_size": 2159, "num_data_blocks": 1069, "num_entries": 6867, "num_filter_entries": 6867, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760092239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.757672) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14996332 bytes
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.758865) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.3 rd, 206.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.0 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 7401, records dropped: 534 output_compression: NoCompression
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.758886) EVENT_LOG_v1 {"time_micros": 1760092239758877, "job": 46, "event": "compaction_finished", "compaction_time_micros": 72778, "compaction_time_cpu_micros": 30211, "output_level": 6, "num_output_files": 1, "total_output_size": 14996332, "num_input_records": 7401, "num_output_records": 6867, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239759496, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239761913, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.684548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:30:39 np0005479822 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 06:30:39 np0005479822 podman[259958]: 2025-10-10 10:30:39.98015774 +0000 UTC m=+0.079341089 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 10 06:30:40 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:40 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:40 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:40.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:41 np0005479822 ovs-vsctl[260004]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 10 06:30:41 np0005479822 nova_compute[235132]: 2025-10-10 10:30:41.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:30:41 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:41 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:41 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:42 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:42 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:42 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:42.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:30:42.231 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 06:30:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:30:42.232 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 06:30:42 np0005479822 ovn_metadata_agent[141151]: 2025-10-10 10:30:42.232 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 06:30:42 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 10 06:30:42 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 10 06:30:42 np0005479822 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 06:30:42 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:43 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:43 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: cache status {prefix=cache status} (starting...)
Oct 10 06:30:43 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:43 np0005479822 lvm[260297]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 06:30:43 np0005479822 lvm[260297]: VG ceph_vg0 finished
Oct 10 06:30:43 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: client ls {prefix=client ls} (starting...)
Oct 10 06:30:43 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:43 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:43 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:43 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:43.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:43 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 10 06:30:43 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:43 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 06:30:43 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/206671409' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 06:30:44 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:44 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:44 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:44.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:44 np0005479822 nova_compute[235132]: 2025-10-10 10:30:44.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:44 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 06:30:44 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2810672141' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:44 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 10 06:30:44 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250802086' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 10 06:30:44 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:45 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 10 06:30:45 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:45 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 10 06:30:45 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494599266' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 06:30:45 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: ops {prefix=ops} (starting...)
Oct 10 06:30:45 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:45 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:45 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:45 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:45.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:45 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 10 06:30:45 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3277923492' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3172139523' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 06:30:46 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:46 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:46 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:46.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:46 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: session ls {prefix=session ls} (starting...)
Oct 10 06:30:46 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 06:30:46 np0005479822 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: status {prefix=status} (starting...)
Oct 10 06:30:46 np0005479822 nova_compute[235132]: 2025-10-10 10:30:46.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/547263865' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3113939320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 06:30:46 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2912853044' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3051349397' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3858622949' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 06:30:47 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:47 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:47 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:47.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/683696595' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 10 06:30:47 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/149990470' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 06:30:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:48 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:48 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:48 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:48 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:48.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 06:30:48 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1680992890' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 06:30:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 10 06:30:48 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2469062832' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 06:30:48 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 06:30:48 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3444631537' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 06:30:49 np0005479822 nova_compute[235132]: 2025-10-10 10:30:49.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 06:30:49 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 06:30:49 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/724274365' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 06:30:49 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:49 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:49 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:49.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:49 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 06:30:49 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3730549091' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b098fe1a40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 166.030792236s of 166.034805298s, submitted: 1
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0987bf4a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b096d5eb40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990317 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991829 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.242959023s of 11.250681877s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991961 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993473 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993341 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104929924s of 12.169629097s, submitted: 3
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096657800 session 0x55b099008780
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b09900a1e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b096c4b0e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b097a14f00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.395177841s of 19.406061172s, submitted: 3
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992882 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.400293350s of 13.415930748s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b09900ab40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.019653320s of 10.023086548s, submitted: 1
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.912096024s of 12.919400215s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dcbc00 session 0x55b09586e3c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09900b680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b09900b0e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.915779114s of 35.922908783s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995774 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083123207s of 12.089940071s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0990090e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.845153809s of 41.853366852s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b098269000 session 0x55b097a15e00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b0987bf4a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e19a40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.253569603s of 15.263068199s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3260416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.775625229s of 15.790586472s, submitted: 4
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988cda40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0987be5a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.357236862s of 20.361238480s, submitted: 1
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000640 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000049 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.517070770s of 14.537599564s, submitted: 4
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905d2c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b096bfb0e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e285a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098e29860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.958301544s of 17.961801529s, submitted: 1
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000038s
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002614 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.060987473s of 12.085634232s, submitted: 5
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9078 writes, 35K keys, 9078 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9078 writes, 2064 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 776 writes, 1221 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
Interval WAL: 776 writes, 366 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002023 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3137536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988dd680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b097a69e00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.974418640s of 97.984451294s, submitted: 3
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001891 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006427 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005836 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.054930687s of 12.073850632s, submitted: 5
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b0994fde00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b09900ab40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.182666779s of 16.189365387s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b098fe4000
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069c00 session 0x55b098f794a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2670592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.918289185s of 12.100981712s, submitted: 367
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b098fe1680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09905d0e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905c780
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.894775391s of 18.905117035s, submitted: 3
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005575 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.683311462s of 10.696245193s, submitted: 3
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005707 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.102847099s of 12.118186951s, submitted: 4
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b0986805a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09840d2c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.958114624s of 71.965682983s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.132945061s of 12.140886307s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b098f8a3c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0988dd680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.335838318s of 16.343191147s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2465792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.721952438s of 15.729380608s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069400 session 0x55b098fe0f00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098f9fc20
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011972 data_alloc: 218103808 data_used: 282624
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18112512 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 145 ms_handle_reset con 0x55b099069000 session 0x55b0988aeb40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18096128 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 15982592 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069400 session 0x55b098f8be00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.096752167s of 14.256991386s, submitted: 46
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083035469s of 12.092510223s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080052 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b09900bc20
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069800 session 0x55b0990083c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b098e26000
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026000 session 0x55b099433680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026400 session 0x55b09722fa40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099027400 session 0x55b098856b40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069000 session 0x55b096c4a3c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.802989960s of 21.816146851s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104862 data_alloc: 218103808 data_used: 7106560
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026000 session 0x55b098fe0960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 11378688 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026400 session 0x55b0982841e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026800 session 0x55b098e19860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099027400 session 0x55b0988cd860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099069000 session 0x55b098f8b2c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdd3000/0x0/0x4ffc00000, data 0x974379/0xa38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 11255808 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb81c000/0x0/0x4ffc00000, data 0xf294c4/0xfee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155058 data_alloc: 218103808 data_used: 7106560
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096e1c960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156283 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 9748480 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188963 data_alloc: 218103808 data_used: 7876608
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.784969330s of 17.929061890s, submitted: 52
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188372 data_alloc: 218103808 data_used: 7876608
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 4276224 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 4218880 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217686 data_alloc: 218103808 data_used: 8949760
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 218103808 data_used: 8953856
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b0991de960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 2875392 heap: 107372544 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.675640106s of 26.806079865s, submitted: 44
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b098fe1a40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988ae5a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068400 session 0x55b09722e960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096bfb860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b099432960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe0b40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098e28960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b0988af680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b099433860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc000
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988dcb40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.327645302s of 10.456887245s, submitted: 32
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295076 data_alloc: 218103808 data_used: 8974336
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 14508032 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361328 data_alloc: 234881024 data_used: 16445440
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364016 data_alloc: 234881024 data_used: 16445440
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515455246s of 12.531072617s, submitted: 5
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 8232960 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bfa000/0x0/0x4ffc00000, data 0x29a6496/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475602 data_alloc: 234881024 data_used: 17408000
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 7258112 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bb4000/0x0/0x4ffc00000, data 0x29e3496/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466414 data_alloc: 234881024 data_used: 17408000
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 8192000 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086705208s of 10.319118500s, submitted: 125
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b0988afa40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b09638f0e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098f9f680
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207454 data_alloc: 218103808 data_used: 5505024
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0990081e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905c5a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 17301504 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09840d4a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b097a68d20
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987bfa40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0993723c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936c960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.987216949s of 32.222537994s, submitted: 83
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b096d5ed20
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098681860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098e292c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0970df2c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0970ded20
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b0987bd2c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905d860
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09723cb40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987be5a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b098f9e1e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104292352 unmapped: 18956288 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 18948096 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153680 data_alloc: 218103808 data_used: 4112384
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.687290192s of 14.744665146s, submitted: 17
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154724 data_alloc: 218103808 data_used: 4239360
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159212 data_alloc: 218103808 data_used: 4243456
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 16105472 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 16080896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 14680064 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213196 data_alloc: 218103808 data_used: 4591616
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.378688812s of 13.587653160s, submitted: 76
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098680960
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6400 session 0x55b0994323c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7fe00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.899662018s of 20.906446457s, submitted: 2
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09723cd20
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 18874368 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857a40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.475157738s of 10.608925819s, submitted: 41
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142193 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b096c4a5a0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b0988dcf00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.499574661s of 16.514310837s, submitted: 4
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b096c4af00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096c4a3c0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09936da40
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936cf00
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b0972781e0
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47d000/0x0/0x4ffc00000, data 0xd19496/0xddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173157 data_alloc: 218103808 data_used: 3641344
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09874a780
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104554496 unmapped: 19750912 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 19668992 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.522539139s of 18.680767059s, submitted: 28
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 19152896 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109240320 unmapped: 15065088 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280927 data_alloc: 218103808 data_used: 7426048
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:49 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280795 data_alloc: 218103808 data_used: 7426048
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b098fe4000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098f78f00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7400 session 0x55b09936de00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097278d20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16244736 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b09936d4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 11780096 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.515605927s of 18.667829514s, submitted: 60
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0991dfc20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7800 session 0x55b098e281e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097a68d20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b0994325a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096ddd4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361043 data_alloc: 234881024 data_used: 10899456
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0982843c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094000 session 0x55b096d112c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098284960
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b096d7f2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 16498688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420911 data_alloc: 234881024 data_used: 19783680
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423343 data_alloc: 234881024 data_used: 20115456
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098fe4b40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b099433c20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.933946609s of 17.117507935s, submitted: 45
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 9232384 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454755 data_alloc: 234881024 data_used: 20537344
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464521 data_alloc: 234881024 data_used: 20365312
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b097a154a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0988cd0e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.756012917s of 16.126758575s, submitted: 124
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0987be1e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09586ef00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09638e3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x9784b9/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 21069824 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3034 syncs, 3.72 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2207 writes, 6322 keys, 2207 commit groups, 1.0 writes per commit group, ingest: 6.08 MB, 0.01 MB/s#012Interval WAL: 2207 writes, 970 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166769 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096dddc20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099008f00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b099009a40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.991001129s of 27.170951843s, submitted: 56
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b097a15e00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096c4b860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096c4a5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4ba40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7f2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094400 session 0x55b0994fde00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.066671371s of 19.101375580s, submitted: 6
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 15245312 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 14950400 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 14770176 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.543247223s of 26.644886017s, submitted: 51
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099009680
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f792c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098e29680
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094c00 session 0x55b09874fe00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095000 session 0x55b096ddd680
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332538 data_alloc: 234881024 data_used: 13742080
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fc5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339966 data_alloc: 234881024 data_used: 14827520
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340574 data_alloc: 234881024 data_used: 14888960
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.697357178s of 16.766319275s, submitted: 23
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 12836864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388676 data_alloc: 234881024 data_used: 15142912
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 10887168 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9345000/0x0/0x4ffc00000, data 0x1e4a4a6/0x1f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393116 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.087865829s of 19.344846725s, submitted: 102
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.535310745s of 12.545021057s, submitted: 2
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0963a4800 session 0x55b099009860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 12345344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.840996742s of 10.006482124s, submitted: 55
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 12279808 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391892 data_alloc: 234881024 data_used: 14974976
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09905c3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0988ddc20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b0970df4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 13426688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0994fda40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445354462s of 14.472743034s, submitted: 373
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096ddc3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 17448960 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0987bc000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098681860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b098f794a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f783c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.050231934s of 27.103757858s, submitted: 15
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 24707072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638fe00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0994332c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b099433e00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8000 session 0x55b0994321e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09586ef00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9df6000/0x0/0x4ffc00000, data 0x139f4a6/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263995 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fcd20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 21086208 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336483 data_alloc: 234881024 data_used: 17555456
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006184578s of 10.114780426s, submitted: 27
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096c4a3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a14b40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 19595264 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8400 session 0x55b097a69860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0982841e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096d5eb40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dcd20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09638f0e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.877738953s of 12.966034889s, submitted: 31
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8800 session 0x55b098f8bc20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096e1da40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098fe41e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 26058752 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0990092c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 26050560 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223186 data_alloc: 218103808 data_used: 7118848
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8c00 session 0x55b09723de00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857e00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638f4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936de00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: mgrc ms_handle_reset ms_handle_reset con 0x55b096656000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: mgrc handle_mgr_configure stats_period=5
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096dcac00 session 0x55b0988cc5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068000 session 0x55b0991deb40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237429 data_alloc: 218103808 data_used: 8724480
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a15680
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9000 session 0x55b096e1cd20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.847922325s of 13.942553520s, submitted: 33
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 26181632 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4af00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dd4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098f8b2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9800 session 0x55b097a68780
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.952396393s of 23.063278198s, submitted: 31
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0x9784bf/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0991df860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09723cb40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0986803c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9c00 session 0x55b098680000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 26148864 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098681860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098680b40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc1e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09874ab40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 26107904 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 26099712 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 25485312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114622464 unmapped: 24895488 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.799320221s of 20.981313705s, submitted: 25
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 20717568 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa1a5000/0x0/0x4ffc00000, data 0xfef508/0x10b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ec9000/0x0/0x4ffc00000, data 0x12cb508/0x1393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320655 data_alloc: 234881024 data_used: 13844480
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 19365888 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.374687195s of 14.491823196s, submitted: 54
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade000 session 0x55b09874a5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b0990094a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f794a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.981967926s of 26.068605423s, submitted: 26
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098284780
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe54a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09723d2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0970df680
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098f8a3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223317 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f8b4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226965 data_alloc: 218103808 data_used: 7639040
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237757 data_alloc: 218103808 data_used: 9252864
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.933888435s of 16.003017426s, submitted: 22
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19529728 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307679 data_alloc: 218103808 data_used: 9330688
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9c31000/0x0/0x4ffc00000, data 0x15644f8/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 21463040 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 21446656 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe4780
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936c3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936d860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09638f4a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09900ad20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341734 data_alloc: 234881024 data_used: 10129408
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe5a40
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 19619840 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 19513344 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348526 data_alloc: 234881024 data_used: 10899456
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.355663300s of 20.596637726s, submitted: 92
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x1aa64f8/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396772 data_alloc: 234881024 data_used: 12251136
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.976144791s of 18.175519943s, submitted: 91
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391364 data_alloc: 234881024 data_used: 12251136
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe41e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b0988cd0e0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 16351232 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0994fcf00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97ee000/0x0/0x4ffc00000, data 0x15974f8/0x165e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327065 data_alloc: 234881024 data_used: 10133504
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b098e29c20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4b860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.100059509s of 34.283664703s, submitted: 59
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09723cd20
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09723c960
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b096d7f2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b096d7e5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7e3c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313977 data_alloc: 218103808 data_used: 7114752
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a5a0
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b0988cd860
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 23453696 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314282 data_alloc: 218103808 data_used: 7118848
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 23429120 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 19677184 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.678905487s of 18.792392731s, submitted: 24
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 10878976 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497028 data_alloc: 234881024 data_used: 19230720
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,1,1])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496100 data_alloc: 234881024 data_used: 19238912
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c19000/0x0/0x4ffc00000, data 0x216d496/0x2233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b09905c000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.617673874s of 10.932350159s, submitted: 136
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b0994fde00
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098fe4000
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:50 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:50 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:50.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 20815872 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config show' '{prefix=config show}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 20987904 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 21602304 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 32645120 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'perf dump' '{prefix=perf dump}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'perf schema' '{prefix=perf schema}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 13K writes, 4030 syncs, 3.37 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2312 writes, 7308 keys, 2312 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
Interval WAL: 2312 writes, 996 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 317.334625244s of 317.442413330s, submitted: 37
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122994688 unmapped: 32817152 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235682 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 124215296 unmapped: 31596544 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 30498816 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 30498816 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125386752 unmapped: 30425088 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125386752 unmapped: 30425088 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 30416896 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 30416896 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 30416896 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125411328 unmapped: 30400512 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config show' '{prefix=config show}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 30621696 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 30228480 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 06:30:50 np0005479822 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}'
Oct 10 06:30:50 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 06:30:50 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3542325360' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 06:30:50 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 06:30:50 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/838423744' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 06:30:51 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 10 06:30:51 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1286859206' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 06:30:51 np0005479822 nova_compute[235132]: 2025-10-10 10:30:51.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:51 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:51 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:51 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:51.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:51 np0005479822 podman[261742]: 2025-10-10 10:30:51.995799332 +0000 UTC m=+0.084725037 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 10 06:30:52 np0005479822 podman[261744]: 2025-10-10 10:30:52.009930798 +0000 UTC m=+0.100857388 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 06:30:52 np0005479822 podman[261743]: 2025-10-10 10:30:52.022126782 +0000 UTC m=+0.110549223 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 06:30:52 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:52 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:52 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:52.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2364379109' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672443346' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 10 06:30:52 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579466127' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 06:30:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:53 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:53 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 10 06:30:53 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1182933054' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 06:30:53 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:53 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:53 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:53.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:53 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 10 06:30:53 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4229550321' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 06:30:53 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 10 06:30:53 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1369088152' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/29899441' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 06:30:54 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:54 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:54 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:54.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:54 np0005479822 nova_compute[235132]: 2025-10-10 10:30:54.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1625571292' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/84867561' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3675716813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 06:30:54 np0005479822 systemd[1]: Starting Hostname Service...
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1162105813' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 06:30:54 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2686277470' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 06:30:54 np0005479822 systemd[1]: Started Hostname Service.
Oct 10 06:30:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 10 06:30:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3951488466' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 06:30:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 10 06:30:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2884419151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 06:30:55 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:55 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:55 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:55 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 10 06:30:55 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1094418802' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 06:30:56 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:56 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:56 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:56 np0005479822 nova_compute[235132]: 2025-10-10 10:30:56.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3022188802' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 06:30:57 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:57 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:57 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 10 06:30:57 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1478406306' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 06:30:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 06:30:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 06:30:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 06:30:58 np0005479822 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 06:30:58 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:58 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 06:30:58 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 06:30:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 06:30:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3424359302' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 06:30:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 06:30:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 06:30:58 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 10 06:30:58 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1807734272' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 06:30:59 np0005479822 nova_compute[235132]: 2025-10-10 10:30:59.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 10 06:30:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 06:30:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 06:30:59 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:30:59 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:30:59 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:59.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:30:59 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 06:30:59 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1510796144' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 06:31:00 np0005479822 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 06:31:00 np0005479822 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 06:31:00 np0005479822 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:31:00.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 06:31:00 np0005479822 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 06:31:00 np0005479822 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1908662771' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
